* v9.1.0 Infernalis release candidate released
@ 2015-10-13 21:01 Sage Weil
From: Sage Weil @ 2015-10-13 21:01 UTC (permalink / raw)
  To: ceph-announce, ceph-devel, ceph-users, ceph-maintainers

This is the first Infernalis release candidate.  There have been some
major changes since hammer, and the upgrade process is non-trivial.
Please read carefully.

Getting the release candidate
-----------------------------

The v9.1.0 packages are pushed to the development release repositories::

  http://download.ceph.com/rpm-testing
  http://download.ceph.com/debian-testing

For more info, see::

  http://docs.ceph.com/docs/master/install/get-packages/

Or install with ceph-deploy via::

  ceph-deploy install --testing HOST

Known issues
------------

* librbd and librados ABI compatibility is broken.  Be careful
  installing this RC on client machines (e.g., those running qemu).
  It will be fixed in the final v9.2.0 release.

Major Changes from Hammer
-------------------------

* *General*:
  * Ceph daemons are now managed via systemd (with the exception of
    Ubuntu Trusty, which still uses upstart).
  * Ceph daemons run as the 'ceph' user instead of root.
  * On Red Hat distros, there is also an SELinux policy.
* *RADOS*:
  * The RADOS cache tier can now proxy write operations to the base
    tier, allowing writes to be handled without forcing migration of
    an object into the cache.
  * The SHEC erasure coding support is no longer flagged as
    experimental. SHEC trades some additional storage space for faster
    repair.
  * There is now a unified queue (and thus prioritization) of client
    IO, recovery, scrubbing, and snapshot trimming.
  * There have been many improvements to low-level repair tooling
    (ceph-objectstore-tool).
  * The internal ObjectStore API has been significantly cleaned up in order
    to facilitate new storage backends like NewStore.
* *RGW*:
  * The Swift API now supports object expiration.
  * There are many Swift API compatibility improvements.
* *RBD*:
  * The ``rbd du`` command shows actual usage (quickly, when
    object-map is enabled).
  * The object-map feature has seen many stability improvements.
  * Object-map and exclusive-lock features can be enabled or disabled
    dynamically (see the example after this list).
  * You can now store user metadata and set persistent librbd options
    associated with individual images.
  * The new deep-flatten feature allows flattening of a clone and all
    of its snapshots.  (Previously snapshots could not be flattened.)
  * The export-diff command is now faster (it uses aio).  There is also
    a new fast-diff feature.
  * The --size argument can be specified with a suffix for units
    (e.g., ``--size 64G``).
  * There is a new ``rbd status`` command that, for now, shows who has
    the image open/mapped.
* *CephFS*:
  * You can now rename snapshots.
  * There have been ongoing improvements around administration, diagnostics,
    and the check and repair tools.
  * The caching and revocation of client cache state for unused
    inodes have been dramatically improved.
  * The ceph-fuse client behaves better on 32-bit hosts.
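
As an illustration of the new and changed RBD commands (a sketch;
``mypool/myimage`` is a placeholder image spec)::

  rbd feature enable mypool/myimage object-map   # toggle features dynamically
  rbd du mypool/myimage                          # show actual usage
  rbd status mypool/myimage                      # who has the image open/mapped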

Distro compatibility
--------------------

We have decided to drop support for many older distributions so that we can
move to a newer compiler toolchain (e.g., C++11).  Although it is still possible
to build Ceph on older distributions by installing backported development tools,
we are no longer building and publishing release packages for them on ceph.com.

In particular:

* CentOS 7 or later; we have dropped support for CentOS 6 (and other
  RHEL 6 derivatives, like Scientific Linux 6).
* Debian Jessie 8.x or later; Debian Wheezy 7.x's g++ has incomplete
  support for C++11 (and no systemd).
* Ubuntu Trusty 14.04 or later; Ubuntu Precise 12.04 is no longer
  supported.
* Fedora 22 or later.

Upgrading from Firefly
----------------------

Upgrading directly from Firefly v0.80.z is not possible.  All clusters
must first upgrade to Hammer v0.94.4 or a later v0.94.z release; only
then is it possible to upgrade to Infernalis v9.2.z.

Note that v0.94.4 isn't released yet, but you can upgrade to a test build
from gitbuilder with::

  ceph-deploy install --dev hammer HOST

The v0.94.4 Hammer point release will be out before v9.2.0 Infernalis
is.
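
Once on Hammer, you can confirm that every daemon is running v0.94.4 or
later before continuing (a sketch; assumes a monitor named ``mon.a``)::

  ceph tell osd.* version     # query every OSD
  ceph tell mon.a version     # repeat for each monitor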

Upgrading from Hammer
---------------------

* For all distributions that support systemd (CentOS 7, Fedora, Debian
  Jessie 8.x, OpenSUSE), ceph daemons are now managed using native systemd
  files instead of the legacy sysvinit scripts.  For example::

    systemctl start ceph.target       # start all daemons
    systemctl status ceph-osd@12      # check status of osd.12

  The main notable distro that is *not* yet using systemd is Ubuntu Trusty
  14.04.  (The next Ubuntu LTS, 16.04, will use systemd instead of upstart.)
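
  On Trusty, the rough upstart equivalents are (a sketch; the ceph-osd
  upstart job takes an ``id`` argument)::

    sudo start ceph-all               # start all daemons
    sudo status ceph-osd id=12        # check status of osd.12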
    
* Ceph daemons now run as user and group ``ceph`` by default.  The
  ceph user has a static UID assigned by Fedora and Debian (also used
  by derivative distributions like RHEL/CentOS and Ubuntu).  On SUSE
  the ceph user will currently get a dynamically assigned UID when the
  user is created.

  If your systems already have a ceph user, upgrading the package will cause
  problems.  We suggest you first remove or rename the existing 'ceph' user
  before upgrading.
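
  For example, one way to rename an existing local 'ceph' user and group
  out of the way (a sketch; check that the account exists first and adapt
  it to your site's user management)::

    getent passwd ceph && usermod -l ceph-old ceph
    getent group ceph && groupmod -n ceph-old ceph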

  When upgrading, administrators have two options:

   #. Add the following line to ``ceph.conf`` on all hosts::

        setuser match path = /var/lib/ceph/$type/$cluster-$id

      This will make the Ceph daemons run as root (i.e., not drop
      privileges and switch to user ceph) if the daemon's data
      directory is still owned by root.  Newly deployed daemons will
      be created with data owned by user ceph and will run with
      reduced privileges, but upgraded daemons will continue to run as
      root.

   #. Fix the data ownership during the upgrade.  This is the preferred option,
      but is more work.  The process for each host would be to:

      #. Upgrade the ceph package.  This creates the ceph user and group.  For
         example::

           ceph-deploy install --stable infernalis HOST

      #. Stop the daemon(s)::

           service ceph stop           # fedora, centos, rhel, debian
           stop ceph-all               # ubuntu

      #. Fix the ownership::

           chown -R ceph:ceph /var/lib/ceph

      #. Restart the daemon(s)::

           start ceph-all                # ubuntu
           systemctl start ceph.target   # debian, centos, fedora, rhel
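
      Afterwards, you can verify that the daemons have dropped privileges
      (a sketch using standard ``ps``)::

        ps -o user= -C ceph-osd       # should print 'ceph', not 'root'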

* The on-disk format for the experimental KeyValueStore OSD backend has
  changed.  You will need to remove any OSDs using that backend before you
  upgrade any test clusters that use it.

Upgrade notes
-------------

* When a pool quota is reached, librados operations now block indefinitely,
  the same way they do when the cluster fills up.  (Previously they would
  return -ENOSPC.)  By default, a full cluster or pool will now block.  If your
  librados application can handle ENOSPC or EDQUOT errors gracefully, you can
  get error returns instead by using the new librados OPERATION_FULL_TRY flag.
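
  For example, the blocking behavior can be observed on a throwaway pool
  (a sketch; ``testpool`` and ``some-local-file`` are placeholders, and
  the write will hang once the quota is exceeded)::

    ceph osd pool set-quota testpool max_bytes 1024
    rados -p testpool put obj1 some-local-file   # blocks once the quota is hit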

Notable changes
---------------

NOTE: These notes are somewhat abbreviated while we find a less
time-consuming process for generating them.

* build: C++11 now supported
* build: many cmake improvements
* build: OSX build fixes (Yan, Zheng)
* build: remove rest-bench
* ceph-disk: many fixes (Loic Dachary)
* ceph-disk: support for multipath devices (Loic Dachary)
* ceph-fuse: mostly behave on 32-bit hosts (Yan, Zheng)
* ceph-objectstore-tool: many improvements (David Zafman)
* common: bufferlist performance tuning (Piotr Dalek, Sage Weil)
* common: make mutex more efficient
* common: some async compression infrastructure (Haomai Wang)
* librados: add FULL_TRY and FULL_FORCE flags for dealing with full clusters or pools (Sage Weil)
* librados: fix notify completion race (#13114 Sage Weil)
* librados, libcephfs: randomize client nonces (Josh Durgin)
* librados: pybind: fix binary omap values (Robin H. Johnson)
* librbd: fix reads larger than the cache size (Lu Shi)
* librbd: metadata filter fixes (Haomai Wang)
* librbd: use write_full when possible (Zhiqiang Wang)
* mds: avoid emitting cap warnings before evicting session (John Spray)
* mds: fix expected holes in journal objects (#13167 Yan, Zheng)
* mds: fix SnapServer crash on deleted pool (John Spray)
* mds: many fixes (Yan, Zheng, John Spray, Greg Farnum)
* mon: add cache over MonitorDBStore (Kefu Chai)
* mon: 'ceph osd metadata' can dump all osds (Haomai Wang)
* mon: detect kv backend failures (Sage Weil)
* mon: fix CRUSH map test for new pools (Sage Weil)
* mon: fix min_last_epoch_clean tracking (Kefu Chai)
* mon: misc scaling fixes (Sage Weil)
* mon: streamline session handling, fix memory leaks (Sage Weil)
* mon: upgrades must pass through hammer (Sage Weil)
* msg/async: many fixes (Haomai Wang)
* osd: cache proxy-write support (Zhiqiang Wang, Samuel Just)
* osd: configure promotion based on write recency (Zhiqiang Wang)
* osd: don't send dup MMonGetOSDMap requests (Sage Weil, Kefu Chai)
* osd: erasure-code: fix SHEC floating point bug (#12936 Loic Dachary)
* osd: erasure-code: update to ISA-L 2.14 (Yuan Zhou)
* osd: fix hitset object naming to use GMT (Kefu Chai)
* osd: fix misc memory leaks (Sage Weil)
* osd: fix peek_queue locking in FileStore (Xinze Chi)
* osd: fix promotion vs full cache tier (Samuel Just)
* osd: fix replay requeue when pg is still activating (#13116 Samuel Just)
* osd: fix scrub stat bugs (Sage Weil, Samuel Just)
* osd: force promotion for ops EC can't handle (Zhiqiang Wang)
* osd: improve behavior on machines with large memory pages (Steve Capper)
* osd: merge multiple setattr calls into a setattrs call (Xinxin Shu)
* osd: newstore prototype (Sage Weil)
* osd: ObjectStore internal API refactor (Sage Weil)
* osd: SHEC no longer experimental
* osd: throttle evict ops (Yunchuan Wen)
* osd: upgrades must pass through hammer (Sage Weil)
* osd: use SEEK_HOLE / SEEK_DATA for sparse copy (Xinxin Shu)
* rbd: rbd-replay-prep and rbd-replay improvements (Jason Dillaman)
* rgw: expose the number of unhealthy workers through admin socket (Guang Yang)
* rgw: fix casing of Content-Type header (Robin H. Johnson)
* rgw: fix decoding of X-Object-Manifest from GET on Swift DLO (Radoslaw Rzarzynski)
* rgw: fix sysvinit script
* rgw: fix sysvinit script w/ multiple instances (Sage Weil, Pavan Rallabhandi)
* rgw: improve handling of already removed buckets in expirer (Radoslaw Rzarzynski)
* rgw: log to /var/log/ceph instead of /var/log/radosgw
* rgw: rework X-Trans-Id header to conform with the Swift API (Radoslaw Rzarzynski)
* rgw: s3 encoding-type for get bucket (Jeff Weber)
* rgw: set max buckets per user in ceph.conf (Vikhyat Umrao)
* rgw: support for Swift expiration API (Radoslaw Rzarzynski, Yehuda Sadeh)
* rgw: user rm is idempotent (Orit Wasserman)
* selinux policy (Boris Ranto, Milan Broz)
* systemd: many fixes (Sage Weil, Owen Synge, Boris Ranto, Dan van der Ster)
* systemd: run daemons as user ceph

Getting Ceph
------------

* Git at git://github.com/ceph/ceph.git
* Tarball at http://download.ceph.com/tarballs/ceph-9.1.0.tar.gz
* For packages, see http://ceph.com/docs/master/install/get-packages
* For ceph-deploy, see http://ceph.com/docs/master/install/install-ceph-deploy


* Re: v9.1.0 Infernalis release candidate released
From: Joao Eduardo Luis @ 2015-10-14  0:32 UTC (permalink / raw)
  To: Sage Weil, ceph-announce, ceph-devel, ceph-users, ceph-maintainers

On 13/10/15 22:01, Sage Weil wrote:
> * *RADOS*:
>   * The RADOS cache tier can now proxy write operations to the base
>     tier, allowing writes to be handled without forcing migration of
>     an object into the cache.
>   * The SHEC erasure coding support is no longer flagged as
>     experimental. SHEC trades some additional storage space for faster
>     repair.
>   * There is now a unified queue (and thus prioritization) of client
>     IO, recovery, scrubbing, and snapshot trimming.
>   * There have been many improvements to low-level repair tooling
>     (ceph-objectstore-tool).
>   * The internal ObjectStore API has been significantly cleaned up in order
>     to facilitate new storage backends like NewStore.

It may also be worth mentioning that we dropped a few monitor options
that people may have customized in their configurations (I guess we
forgot to add them to the PendingReleaseNotes).

These options are:

- mon_lease_renew_interval (default: 3)
- mon_lease_ack_timeout (default: 10)
- mon_accept_timeout (default: 10)

If you are using these in your configuration, please use these instead:

- mon_lease_renew_interval_factor (default: 0.6)
- mon_lease_ack_timeout_factor (default: 2.0)
- mon_accept_timeout_factor (default: 2.0)

These are now applied as factors of 'mon_lease'. If you have also
adjusted 'mon_lease' (default: 5) and your previous values match the new
factors' defaults, all you need to do is drop the old options from your
configuration file. Otherwise, please adjust the factors to match
whatever is working for you.
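
For example, the conversion divides the old value by 'mon_lease' (a
sketch, assuming you had previously set 'mon_lease = 5' and
'mon_lease_ack_timeout = 15'):

  [mon]
  mon lease = 5
  # old: mon lease ack timeout = 15  ->  factor = 15 / 5 = 3.0
  mon lease ack timeout factor = 3.0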

  -Joao


* Re: v9.1.0 Infernalis release candidate released
From: Goncalo Borges @ 2015-10-14  4:51 UTC (permalink / raw)
  To: Sage Weil, ceph-announce, ceph-devel, ceph-users
  Cc: ceph-maintainers

Hi Sage...

I've seen that the rh6 derivatives have been ruled out.

This is a problem in our case since the OS choice in our systems is, to
some extent, imposed by CERN. The experiments' software is certified for
SL6 and the transition to SL7 will take some time.

This is kind of a showstopper, especially if we can't deploy clients on
SL6 / CentOS 6.

Is there any alternative?

TIA
Goncalo


On 10/14/2015 08:01 AM, Sage Weil wrote:
> Distro compatibility
> --------------------
>
> * CentOS 7 or later; we have dropped support for CentOS 6 (and other
>   RHEL 6 derivatives, like Scientific Linux 6).
> [remainder of the announcement snipped]

-- 
Goncalo Borges
Research Computing
ARC Centre of Excellence for Particle Physics at the Terascale
School of Physics A28 | University of Sydney, NSW  2006
T: +61 2 93511937


* Re: [Ceph-announce] v9.1.0 Infernalis release candidate released
From: Gaudenz Steinlin @ 2015-10-14  8:06 UTC (permalink / raw)
  To: Sage Weil, ceph-devel, ceph-maintainers; +Cc: James Page


Hi

Sage Weil <sage@newdream.net> writes:
> Upgrading from Firefly
> ----------------------
>
> Upgrading directly from Firefly v0.80.z is not possible.  All clusters
> must first upgrade to Hammer v0.94.4 or a later v0.94.z release; only
> then is it possible to upgrade to Infernalis 9.2.z.
>

What's the exact issue with upgrading directly from Firefly to
Infernalis? Is it just that you can't run mixed Firefly and Infernalis
versions at the same time during an upgrade, or is the on-disk format
also changing, so that Infernalis daemons won't understand the Firefly
data structures?

This is bad for Debian, as the current stable release contains Firefly
and the next Debian stable will likely contain the LTS release scheduled
after Infernalis. So there won't be a Debian stable with Hammer. But
upgrades are important, and we would really like to have an upgrade path
from one Debian release to the next.

Is there anything that can be done about this? At least having an
offline upgrade option would be nice, if an online upgrade is not
possible. How are other distributions planning to handle this? Ubuntu
will likely have a similar problem, at least for LTS-to-LTS upgrades.

Gaudenz



* Re: [ceph-users] v9.1.0 Infernalis release candidate released
From: Dan van der Ster @ 2015-10-14  8:11 UTC (permalink / raw)
  To: Goncalo Borges
  Cc: Sage Weil, ceph-announce, ceph-devel, ceph-users, ceph-maintainers

Hi Goncalo,

On Wed, Oct 14, 2015 at 6:51 AM, Goncalo Borges
<goncalo@physics.usyd.edu.au> wrote:
> Hi Sage...
>
> I've seen that the rh6 derivatives have been ruled out.
>
> This is a problem in our case since the OS choice in our systems is, to
> some extent, imposed by CERN. The experiments' software is certified for
> SL6 and the transition to SL7 will take some time.

Are you accessing Ceph directly from "physics" machines? Here at CERN
we run CentOS 7 on the native clients (e.g. qemu-kvm hosts), and by the
time we upgrade to Infernalis the servers will all be CentOS 7 as
well. Batch nodes running SL6 don't (currently) talk to Ceph directly
(in the future they might talk to Ceph-based storage via an xroot
gateway). But if there are use cases, then perhaps we could find a
place to build and distribute the newer ceph clients.

There's a mailing list, ceph-talk@cern.ch, where we could take this
discussion. Mail me if you have trouble joining that e-Group.

Cheers, Dan
CERN IT-DSS

> This is kind of a showstopper, especially if we can't deploy clients on
> SL6 / CentOS 6.
>
> Is there any alternative?
>
> TIA
> Goncalo
>
> On 10/14/2015 08:01 AM, Sage Weil wrote:
>> [full announcement snipped]


* Re: [ceph-users] v9.1.0 Infernalis release candidate released
From: Sage Weil @ 2015-10-14 12:30 UTC (permalink / raw)
  To: Dan van der Ster; +Cc: Goncalo Borges, ceph-devel, ceph-users, ceph-maintainers

On Wed, 14 Oct 2015, Dan van der Ster wrote:
> Hi Goncalo,
> 
> On Wed, Oct 14, 2015 at 6:51 AM, Goncalo Borges
> <goncalo@physics.usyd.edu.au> wrote:
> > Hi Sage...
> >
> > I've seen that the rh6 derivatives have been ruled out.
> >
> > This is a problem in our case since the OS choice in our systems is, to
> > some extent, imposed by CERN. The experiments' software is certified for
> > SL6 and the transition to SL7 will take some time.
> 
> Are you accessing Ceph directly from "physics" machines? Here at CERN
> we run CentOS 7 on the native clients (e.g. qemu-kvm hosts), and by the
> time we upgrade to Infernalis the servers will all be CentOS 7 as
> well. Batch nodes running SL6 don't (currently) talk to Ceph directly
> (in the future they might talk to Ceph-based storage via an xroot
> gateway). But if there are use cases, then perhaps we could find a
> place to build and distribute the newer ceph clients.
> 
> There's a mailing list, ceph-talk@cern.ch, where we could take this
> discussion. Mail me if you have trouble joining that e-Group.

Also note that it *is* possible to build infernalis on el6, but it 
requires a lot more effort... enough that we would rather spend our time 
elsewhere (at least as far as ceph.com packages go).  If someone else 
wants to do that work we'd be happy to take patches to update the build 
and/or release process.

IIRC the thing that eventually made me stop going down this path was the 
fact that the newer gcc had a runtime dependency on the newer libstdc++, 
which wasn't part of the base distro... which means we'd also need to 
publish those packages in the ceph.com repos, or users would have to 
add some backport repo or ppa or whatever to get things running.  Bleh.

sage


> 
> Cheers, Dan
> CERN IT-DSS
> 
> > This is kind of a showstopper specially if we can't deploy clients in SL6 /
> > Centos6.
> >
> > Is there any alternative?
> >
> > TIA
> > Goncalo
> >
> >
> >
> > On 10/14/2015 08:01 AM, Sage Weil wrote:
> >>
> >> This is the first Infernalis release candidate.  There have been some
> >> major changes since hammer, and the upgrade process is non-trivial.
> >> Please read carefully.
> >>
> >> Getting the release candidate
> >> -----------------------------
> >>
> >> The v9.1.0 packages are pushed to the development release repositories::
> >>
> >>    http://download.ceph.com/rpm-testing
> >>    http://download.ceph.com/debian-testing
> >>
> >> For for info, see::
> >>
> >>    http://docs.ceph.com/docs/master/install/get-packages/
> >>
> >> Or install with ceph-deploy via::
> >>
> >>    ceph-deploy install --testing HOST
> >>
> >> Known issues
> >> ------------
> >>
> >> * librbd and librados ABI compatibility is broken.  Be careful
> >>    installing this RC on client machines (e.g., those running qemu).
> >>    It will be fixed in the final v9.2.0 release.
> >>
> >> Major Changes from Hammer
> >> -------------------------
> >>
> >> * *General*:
> >>    * Ceph daemons are now managed via systemd (with the exception of
> >>      Ubuntu Trusty, which still uses upstart).
> >>    * Ceph daemons run as 'ceph' user instead root.
> >>    * On Red Hat distros, there is also an SELinux policy.
> >> * *RADOS*:
> >>    * The RADOS cache tier can now proxy write operations to the base
> >>      tier, allowing writes to be handled without forcing migration of
> >>      an object into the cache.
> >>    * The SHEC erasure coding support is no longer flagged as
> >>      experimental. SHEC trades some additional storage space for faster
> >>      repair.
> >>    * There is now a unified queue (and thus prioritization) of client
> >>      IO, recovery, scrubbing, and snapshot trimming.
> >>    * There have been many improvements to low-level repair tooling
> >>      (ceph-objectstore-tool).
> >>    * The internal ObjectStore API has been significantly cleaned up in
> >> order
> >>      to faciliate new storage backends like NewStore.
> >> * *RGW*:
> >>    * The Swift API now supports object expiration.
> >>    * There are many Swift API compatibility improvements.
> >> * *RBD*:
> >>    * The ``rbd du`` command shows actual usage (quickly, when
> >>      object-map is enabled).
> >>    * The object-map feature has seen many stability improvements.
> >>    * Object-map and exclusive-lock features can be enabled or disabled
> >>      dynamically.
> >>    * You can now store user metadata and set persistent librbd options
> >>      associated with individual images.
> >>    * The new deep-flatten features allows flattening of a clone and all
> >>      of its snapshots.  (Previously snapshots could not be flattened.)
> >>    * The export-diff command command is now faster (it uses aio).  There
> >> is also
> >>      a new fast-diff feature.
> >>    * The --size argument can be specified with a suffix for units
> >>      (e.g., ``--size 64G``).
> >>    * There is a new ``rbd status`` command that, for now, shows who has
> >>      the image open/mapped.
> >> * *CephFS*:
> >>    * You can now rename snapshots.
> >>    * There have been ongoing improvements around administration,
> >> diagnostics,
> >>      and the check and repair tools.
> >>    * The caching and revocation of client cache state due to unused
> >>      inodes has been dramatically improved.
> >>    * The ceph-fuse client behaves better on 32-bit hosts.
> >>
> >> Distro compatibility
> >> --------------------
> >>
> >> We have decided to drop support for many older distributions so that we
> >> can
> >> move to a newer compiler toolchain (e.g., C++11).  Although it is still
> >> possible
> >> to build Ceph on older distributions by installing backported development
> >> tools,
> >> we are not building and publishing release packages for ceph.com.
> >>
> >> In particular,
> >>
> >> * CentOS 7 or later; we have dropped support for CentOS 6 (and other
> >>    RHEL 6 derivatives, like Scientific Linux 6).
> >> * Debian Jessie 8.x or later; Debian Wheezy 7.x's g++ has incomplete
> >>    support for C++11 (and no systemd).
> >> * Ubuntu Trusty 14.04 or later; Ubuntu Precise 12.04 is no longer
> >>    supported.
> >> * Fedora 22 or later.
> >>
> >> Upgrading from Firefly
> >> ----------------------
> >>
> >> Upgrading directly from Firefly v0.80.z is not possible.  All clusters
> >> must first upgrade to Hammer v0.94.4 or a later v0.94.z release; only
> >> then is it possible to upgrade to Infernalis 9.2.z.
> >>
> >> Note that v0.94.4 isn't released yet, but you can upgrade to a test build
> >> from gitbuilder with::
> >>
> >>    ceph-deploy install --dev hammer HOST
> >>
> >> The v0.94.4 Hammer point release will be out before v9.2.0 Infernalis
> >> is.
> >>
> >> Upgrading from Hammer
> >> ---------------------
> >>
> >> * For all distributions that support systemd (CentOS 7, Fedora, Debian
> >>    Jessie 8.x, OpenSUSE), ceph daemons are now managed using native
> >> systemd
> >>    files instead of the legacy sysvinit scripts.  For example,::
> >>
> >>      systemctl start ceph.target       # start all daemons
> >>      systemctl status ceph-osd@12      # check status of osd.12
> >>
> >>    The main notable distro that is *not* yet using systemd is Ubuntu
> >> trusty
> >>    14.04.  (The next Ubuntu LTS, 16.04, will use systemd instead of
> >> upstart.)
> >>      * Ceph daemons now run as user and group ``ceph`` by default.  The
> >>    ceph user has a static UID assigned by Fedora and Debian (also used
> >>    by derivative distributions like RHEL/CentOS and Ubuntu).  On SUSE
> >>    the ceph user will currently get a dynamically assigned UID when the
> >>    user is created.
> >>
> >>    If your systems already have a ceph user, upgrading the package will
> >> cause
> >>    problems.  We suggest you first remove or rename the existing 'ceph'
> >> user
> >>    before upgrading.
> >>
> >>    When upgrading, administrators have two options:
> >>
> >>     #. Add the following line to ``ceph.conf`` on all hosts::
> >>
> >>          setuser match path = /var/lib/ceph/$type/$cluster-$id
> >>
> >>        This will make the Ceph daemons run as root (i.e., not drop
> >>        privileges and switch to user ceph) if the daemon's data
> >>        directory is still owned by root.  Newly deployed daemons will
> >>        be created with data owned by user ceph and will run with
> >>        reduced privileges, but upgraded daemons will continue to run as
> >>        root.
> >>
> >>     #. Fix the data ownership during the upgrade.  This is the preferred
> >> option,
> >>        but is more work.  The process for each host would be to:
> >>
> >>        #. Upgrade the ceph package.  This creates the ceph user and group.
> >> For
> >>          example::
> >>
> >>            ceph-deploy install --stable infernalis HOST
> >>
> >>        #. Stop the daemon(s).::
> >>
> >>            service ceph stop           # fedora, centos, rhel, debian
> >>            stop ceph-all               # ubuntu
> >>
> >>        #. Fix the ownership::
> >>
> >>            chown -R ceph:ceph /var/lib/ceph
> >>
> >>        #. Restart the daemon(s).::
> >>
> >>            start ceph-all                # ubuntu
> >>            systemctl start ceph.target   # debian, centos, fedora, rhel
> >>
> >> * The on-disk format for the experimental KeyValueStore OSD backend has
> >>    changed.  You will need to remove any OSDs using that backend before
> >> you
> >>    upgrade any test clusters that use it.
> >>
> >> Upgrade notes
> >> -------------
> >>
> >> * When a pool quota is reached, librados operations now block
> >> indefinitely,
> >>    the same way they do when the cluster fills up.  (Previously they would
> >> return
> >>    -ENOSPC).  By default, a full cluster or pool will now block.  If your
> >>    librados application can handle ENOSPC or EDQUOT errors gracefully, you
> >> can
> >>    get error returns instead by using the new librados OPERATION_FULL_TRY
> >> flag.
> >>
> >> Notable changes
> >> ---------------
> >>
> >> NOTE: These notes are somewhat abbreviated while we find a less
> >> time-consuming process for generating them.
> >>
> >> * build: C++11 now supported
> >> * build: many cmake improvements
> >> * build: OSX build fixes (Yan, Zheng)
> >> * build: remove rest-bench
> >> * ceph-disk: many fixes (Loic Dachary)
> >> * ceph-disk: support for multipath devices (Loic Dachary)
> >> * ceph-fuse: mostly behave on 32-bit hosts (Yan, Zheng)
> >> * ceph-objectstore-tool: many improvements (David Zafman)
> >> * common: bufferlist performance tuning (Piotr Dalek, Sage Weil)
> >> * common: make mutex more efficient
> >> * common: some async compression infrastructure (Haomai Wang)
> >> * librados: add FULL_TRY and FULL_FORCE flags for dealing with full
> >> clusters or pools (Sage Weil)
> >> * librados: fix notify completion race (#13114 Sage Weil)
> >> * librados, libcephfs: randomize client nonces (Josh Durgin)
> >> * librados: pybind: fix binary omap values (Robin H. Johnson)
> >> * librbd: fix reads larger than the cache size (Lu Shi)
> >> * librbd: metadata filter fixes (Haomai Wang)
> >> * librbd: use write_full when possible (Zhiqiang Wang)
> >> * mds: avoid emitting cap warnigns before evicting session (John Spray)
> >> * mds: fix expected holes in journal objects (#13167 Yan, Zheng)
> >> * mds: fix SnapServer crash on deleted pool (John Spray)
> >> * mds: many fixes (Yan, Zheng, John Spray, Greg Farnum)
> >> * mon: add cache over MonitorDBStore (Kefu Chai)
> >> * mon: 'ceph osd metadata' can dump all osds (Haomai Wang)
> >> * mon: detect kv backend failures (Sage Weil)
> >> * mon: fix CRUSH map test for new pools (Sage Weil)
> >> * mon: fix min_last_epoch_clean tracking (Kefu Chai)
> >> * mon: misc scaling fixes (Sage Weil)
> >> * mon: streamline session handling, fix memory leaks (Sage Weil)
> >> * mon: upgrades must pass through hammer (Sage Weil)
> >> * msg/async: many fixes (Haomai Wang)
> >> * osd: cache proxy-write support (Zhiqiang Wang, Samuel Just)
> >> * osd: configure promotion based on write recency (Zhiqiang Wang)
> >> * osd: don't send dup MMonGetOSDMap requests (Sage Weil, Kefu Chai)
> >> * osd: erasure-code: fix SHEC floating point bug (#12936 Loic Dachary)
> >> * osd: erasure-code: update to ISA-L 2.14 (Yuan Zhou)
> >> * osd: fix hitset object naming to use GMT (Kefu Chai)
> >> * osd: fix misc memory leaks (Sage Weil)
> >> * osd: fix peek_queue locking in FileStore (Xinze Chi)
> >> * osd: fix promotion vs full cache tier (Samuel Just)
> >> * osd: fix replay requeue when pg is still activating (#13116 Samuel Just)
> >> * osd: fix scrub stat bugs (Sage Weil, Samuel Just)
> >> * osd: force promotion for ops EC can't handle (Zhiqiang Wang)
> >> * osd: improve behavior on machines with large memory pages (Steve Capper)
> >> * osd: merge multiple setattr calls into a setattrs call (Xinxin Shu)
> >> * osd: newstore prototype (Sage Weil)
> >> * osd: ObjectStore internal API refactor (Sage Weil)
> >> * osd: SHEC no longer experimental
> >> * osd: throttle evict ops (Yunchuan Wen)
> >> * osd: upgrades must pass through hammer (Sage Weil)
> >> * osd: use SEEK_HOLE / SEEK_DATA for sparse copy (Xinxin Shu)
> >> * rbd: rbd-replay-prep and rbd-replay improvements (Jason Dillaman)
> >> * rgw: expose the number of unhealthy workers through admin socket (Guang
> >> Yang)
> >> * rgw: fix casing of Content-Type header (Robin H. Johnson)
> >> * rgw: fix decoding of X-Object-Manifest from GET on Swift DLO (Radslow
> >> Rzarzynski)
> >> * rgw: fix sysvinit script
> >> * rgw: fix sysvinit script w/ multiple instances (Sage Weil, Pavan
> >> Rallabhandi)
> >> * rgw: improve handling of already removed buckets in expirer (Radoslaw
> >> Rzarzynski)
> >> * rgw: log to /var/log/ceph instead of /var/log/radosgw
> >> * rgw: rework X-Trans-Id header to conform with the Swift API (Radoslaw
> >> Rzarzynski)
> >> * rgw: s3 encoding-type for get bucket (Jeff Weber)
> >> * rgw: set max buckets per user in ceph.conf (Vikhyat Umrao)
> >> * rgw: support for Swift expiration API (Radoslaw Rzarzynski, Yehuda
> >> Sadeh)
> >> * rgw: user rm is idempotent (Orit Wasserman)
> >> * selinux policy (Boris Ranto, Milan Broz)
> >> * systemd: many fixes (Sage Weil, Owen Synge, Boris Ranto, Dan van der
> >> Ster)
> >> * systemd: run daemons as user ceph
> >>
> >> Getting Ceph
> >> ------------
> >>
> >> * Git at git://github.com/ceph/ceph.git
> >> * Tarball at http://download.ceph.com/tarballs/ceph-9.1.0.tar.gz
> >> * For packages, see http://ceph.com/docs/master/install/get-packages
> >> * For ceph-deploy, see
> >> http://ceph.com/docs/master/install/install-ceph-deploy
> >
> >
> > --
> > Goncalo Borges
> > Research Computing
> > ARC Centre of Excellence for Particle Physics at the Terascale
> > School of Physics A28 | University of Sydney, NSW  2006
> > T: +61 2 93511937
> >
> >
> 
> 


* Re: [Ceph-announce] v9.1.0 Infernalis release candidate released
  2015-10-14  8:06 ` [Ceph-announce] " Gaudenz Steinlin
@ 2015-10-14 12:37   ` Sage Weil
  0 siblings, 0 replies; 16+ messages in thread
From: Sage Weil @ 2015-10-14 12:37 UTC (permalink / raw)
  To: Gaudenz Steinlin; +Cc: ceph-devel, ceph-maintainers, James Page

On Wed, 14 Oct 2015, Gaudenz Steinlin wrote:
> 
> Hi
> 
> Sage Weil <sage@newdream.net> writes:
> > Upgrading from Firefly
> > ----------------------
> >
> > Upgrading directly from Firefly v0.80.z is not possible.  All clusters
> > must first upgrade to Hammer v0.94.4 or a later v0.94.z release; only
> > then is it possible to upgrade to Infernalis 9.2.z.
> >
> 
> What's the exact issue with upgrading directly from Firefly to
> Infernalis? Is it just that you can't run mixed Firefly and Infernalis
> versions at the same time during an upgrade, or is the on-disk format
> also changing, so that Infernalis daemons won't understand the Firefly
> data structures?

There are two issues:

1- By requiring that each ceph-osd process start up on hammer, we were 
able to remove a bunch of compatibility/conversion code for supporting 
pre-hammer on-disk formats.  This was just nice cleanup.

2- A running post-hammer OSD can't talk to a pre-hammer OSD.  Partly this 
was an opportunity to remove a bunch of compat code, but in the end we 
also had to make fixes to hammer itself to make it interoperate with 
infernalis (hence the requirement that you run v0.94.4 or later, not just 
any v0.94.z).
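
In practice that means a two-hop upgrade, for example (a sketch using the 
commands from the announcement; HOST is a placeholder, and the hammer 
build is a test build until v0.94.4 is out):

   # hop 1: firefly -> hammer v0.94.4+
   ceph-deploy install --dev hammer HOST
   # restart mons, then osds, and wait for the cluster to go healthy

   # hop 2: hammer -> the infernalis RC
   ceph-deploy install --testing HOST
   # restart daemons again, following the hammer upgrade notes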

> This is bad for Debian as the current stable release contains Firefly
> and the next Debian stable will likely contain the LTS release scheduled
> after Infernalis. So there won't be a Debian stable with Hammer. But
> upgrades are important and we would really like to have an upgrade path
> from one Debian release to the next.
> 
> Is there anything that can be done about this? At least having an
> offline upgrade option would be nice, if an online upgrade is not
> possible. How are other distributions planning to handle this? Ubuntu
> will likely have a similar problem at least for LTS to LTS upgrades.

That does suck... I definitely didn't consider this possibility.

One possibility is to revert #1 above so that you can upgrade from 
firefly -> jewel as long as you stop all OSDs.  That rules out an online 
upgrade but at least it can be done with Debian packages.  It's the patch 
series merged by cbe101e7db4265dfaeae3b85d5e7f266c6a1e9d5 that is the 
problem.
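
As a sketch, the offline path would then look something like this 
(assuming the compat code is restored; the exact packages and init 
commands vary by release):

   # stop every ceph daemon in the cluster first -- no online upgrade
   service ceph stop
   # upgrade straight to the new packages from the Debian archive
   apt-get update && apt-get dist-upgrade
   # fix ownership if the new daemons run as user 'ceph'
   chown -R ceph:ceph /var/lib/ceph
   # bring everything back up under systemd
   systemctl start ceph.target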

The other possibility is to ask users to upgrade via ceph.com packages.

Hrm...
sage



* RE: v9.1.0 Infernalis release candidate released
  2015-10-13 21:01 v9.1.0 Infernalis release candidate released Sage Weil
                   ` (2 preceding siblings ...)
  2015-10-14  8:06 ` [Ceph-announce] " Gaudenz Steinlin
@ 2015-10-14 17:24 ` Deneau, Tom
  2015-10-14 17:39   ` Sage Weil
  3 siblings, 1 reply; 16+ messages in thread
From: Deneau, Tom @ 2015-10-14 17:24 UTC (permalink / raw)
  To: Sage Weil, ceph-devel

I tried an rpmbuild on Fedora 21 from the tarball, which seemed to work OK.
But I'm having trouble doing "ceph-deploy --overwrite-conf mon create-initial" with 9.1.0.
This is using ceph-deploy version 1.5.24.
Is this part of the "needs Fedora 22 or later" story?

-- Tom

[myhost][DEBUG ] create a done file to avoid re-doing the mon deployment
[myhost][DEBUG ] create the init path if it does not exist
[myhost][DEBUG ] locating the `service` executable...
[myhost][INFO  ] Running command: /usr/sbin/service ceph -c /etc/ceph/ceph.conf start mon.myhost
[myhost][WARNIN] The service command supports only basic LSB actions (start, stop, restart, try-restart, reload, force-reload, status). For other actions, please try to use systemctl.
[myhost][ERROR ] RuntimeError: command returned non-zero exit status: 2
[ceph_deploy.mon][ERROR ] Failed to execute command: /usr/sbin/service ceph -c /etc/ceph/ceph.conf start mon.myhost
[ceph_deploy][ERROR ] GenericError: Failed to create 1 monitors




* RE: v9.1.0 Infernalis release candidate released
  2015-10-14 17:24 ` Deneau, Tom
@ 2015-10-14 17:39   ` Sage Weil
  2015-10-14 20:39     ` Deneau, Tom
  0 siblings, 1 reply; 16+ messages in thread
From: Sage Weil @ 2015-10-14 17:39 UTC (permalink / raw)
  To: Deneau, Tom; +Cc: ceph-devel

On Wed, 14 Oct 2015, Deneau, Tom wrote:
> I tried an rpmbuild on Fedora 21 from the tarball, which seemed to work OK.
> But I'm having trouble doing "ceph-deploy --overwrite-conf mon create-initial" with 9.1.0.
> This is using ceph-deploy version 1.5.24.
> Is this part of the "needs Fedora 22 or later" story?

Yeah, I think so, but it's probably mostly a "tested fc22 and it worked" 
situation.  This is probably what is failing:

https://github.com/ceph/ceph-deploy/blob/master/ceph_deploy/hosts/fedora/__init__.py#L21

So maybe the specfile isn't using systemd for fc21?
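
One way to check, and a possible manual workaround (a sketch only; the 
ceph-mon@<id> unit name is an assumption patterned on the ceph-osd@<id> 
example in the announcement):

   # does the fc21 build ship systemd units or sysvinit scripts?
   rpm -ql ceph | grep -E 'systemd|init.d'
   # if units are present, start the monitor directly rather than via 'service':
   sudo systemctl start ceph-mon@myhost
   sudo systemctl status ceph-mon@myhost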

sage


> 
> -- Tom
> 
> [myhost][DEBUG ] create a done file to avoid re-doing the mon deployment
> [myhost][DEBUG ] create the init path if it does not exist
> [myhost][DEBUG ] locating the `service` executable...
> [myhost][INFO  ] Running command: /usr/sbin/service ceph -c /etc/ceph/ceph.conf start mon.myhost
> [myhost][WARNIN] The service command supports only basic LSB actions (start, stop, restart, try-restart, reload, force-reload, status). For other actions, please try to use systemctl.
> [myhost][ERROR ] RuntimeError: command returned non-zero exit status: 2
> [ceph_deploy.mon][ERROR ] Failed to execute command: /usr/sbin/service ceph -c /etc/ceph/ceph.conf start mon.myhost
> [ceph_deploy][ERROR ] GenericError: Failed to create 1 monitors
> 
> 


* RE: v9.1.0 Infernalis release candidate released
  2015-10-14 17:39   ` Sage Weil
@ 2015-10-14 20:39     ` Deneau, Tom
  2015-10-14 20:59       ` Sage Weil
  0 siblings, 1 reply; 16+ messages in thread
From: Deneau, Tom @ 2015-10-14 20:39 UTC (permalink / raw)
  To: ceph-devel

Trying to bring up a cluster using the pre-built binary packages on Ubuntu Trusty:
Installed using "ceph-deploy install --dev infernalis `hostname`"

This install seemed to work but then when I later tried
   ceph-deploy --overwrite-conf mon create-initial
it failed with
[][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.myhost.asok \
mon_status
[][ERROR ] admin_socket: exception getting command descriptions: [Errno 2] No such file or directory

and indeed /var/run/ceph was empty.
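
A few checks that might narrow this down (a sketch; the upstart job 
syntax is an assumption based on the ubuntu packaging):

   ls -ld /var/run/ceph             # should exist and be owned by user 'ceph'
   initctl list | grep ceph         # are the ceph upstart jobs registered?
   sudo start ceph-mon id=myhost    # try starting the mon by hand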

I wasn't sure if this was due to an existing user named ceph (I hadn't checked), but I did a userdel of ceph
and a ceph-deploy uninstall and reinstall.

Now the install part is getting an error near where it tries to create the ceph user.

[][DEBUG ] Adding system user ceph....done
[][DEBUG ] Setting system user ceph properties..Processing triggers for libc-bin (2.19-0ubuntu6.6) ...
[][WARNIN] usermod: user 'ceph' does not exist

Any suggestions for recovering from this situation?

-- Tom

> -----Original Message-----
> From: Sage Weil [mailto:sage@newdream.net]
> Sent: Wednesday, October 14, 2015 12:40 PM
> To: Deneau, Tom
> Cc: ceph-devel@vger.kernel.org
> Subject: RE: v9.1.0 Infernalis release candidate released
> 
> On Wed, 14 Oct 2015, Deneau, Tom wrote:
> > > I tried an rpmbuild on Fedora 21 from the tarball, which seemed to work OK.
> > > But I'm having trouble doing "ceph-deploy --overwrite-conf mon create-initial" with 9.1.0.
> > > This is using ceph-deploy version 1.5.24.
> > > Is this part of the "needs Fedora 22 or later" story?
> 
> > Yeah, I think so, but it's probably mostly a "tested fc22 and it worked"
> > situation.  This is probably what is failing:
> 
> > https://github.com/ceph/ceph-deploy/blob/master/ceph_deploy/hosts/fedora/__init__.py#L21
> 
> So maybe the specfile isn't using systemd for fc21?
> 
> sage
> 
> 
> >
> > -- Tom
> >
> > > [myhost][DEBUG ] create a done file to avoid re-doing the mon deployment
> > > [myhost][DEBUG ] create the init path if it does not exist
> > > [myhost][DEBUG ] locating the `service` executable...
> > > [myhost][INFO  ] Running command: /usr/sbin/service ceph -c /etc/ceph/ceph.conf start mon.myhost
> > > [myhost][WARNIN] The service command supports only basic LSB actions (start, stop, restart, try-restart, reload, force-reload, status). For other actions, please try to use systemctl.
> > > [myhost][ERROR ] RuntimeError: command returned non-zero exit status: 2
> > > [ceph_deploy.mon][ERROR ] Failed to execute command: /usr/sbin/service ceph -c /etc/ceph/ceph.conf start mon.myhost
> > > [ceph_deploy][ERROR ] GenericError: Failed to create 1 monitors
> >
> >


* RE: v9.1.0 Infernalis release candidate released
  2015-10-14 20:39     ` Deneau, Tom
@ 2015-10-14 20:59       ` Sage Weil
  2015-10-14 21:23         ` Deneau, Tom
  0 siblings, 1 reply; 16+ messages in thread
From: Sage Weil @ 2015-10-14 20:59 UTC (permalink / raw)
  To: Deneau, Tom; +Cc: ceph-devel

On Wed, 14 Oct 2015, Deneau, Tom wrote:
> Trying to bring up a cluster using the pre-built binary packages on Ubuntu Trusty:
> Installed using "ceph-deploy install --dev infernalis `hostname`"
> 
> This install seemed to work but then when I later tried
>    ceph-deploy --overwrite-conf mon create-initial
> it failed with
> [][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.myhost.asok \
> mon_status
> [][ERROR ] admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
> 
> and indeed /var/run/ceph was empty.
> 
> I wasn't sure if this was due to an existing user named ceph (I hadn't checked), but I did a userdel of ceph
> and a ceph-deploy uninstall and reinstall.
> 
> Now the install part is getting an error near where it tries to create the ceph user.
> 
> [][DEBUG ] Adding system user ceph....done
> [][DEBUG ] Setting system user ceph properties..Processing triggers for libc-bin (2.19-0ubuntu6.6) ...
> [][WARNIN] usermod: user 'ceph' does not exist
> 
> Any suggestions for recovering from this situation?

I'm guessing this is... trusty?  Did you remove the package, then verify 
the user is deleted, then (re)install?  You may need to do a dpkg purge 
(not just uninstall/remove) to make it forget its state...
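
Roughly, the recovery sequence would be (a sketch; the exact package set 
may differ):

   # purge so dpkg forgets its state, not just remove
   sudo apt-get purge ceph ceph-common
   # make sure the half-created user is really gone
   sudo userdel ceph; getent passwd ceph    # getent should print nothing
   # then reinstall
   ceph-deploy install --dev infernalis `hostname`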

I'm re-running the ceph-deploy test suite (centos7, trusty) to make sure 
nothing is awry...

	http://pulpito.ceph.com/sage-2015-10-14_13:55:41-ceph-deploy-infernalis---basic-vps/

sage


> -- Tom
> 
> > -----Original Message-----
> > From: Sage Weil [mailto:sage@newdream.net]
> > Sent: Wednesday, October 14, 2015 12:40 PM
> > To: Deneau, Tom
> > Cc: ceph-devel@vger.kernel.org
> > Subject: RE: v9.1.0 Infernalis release candidate released
> > 
> > On Wed, 14 Oct 2015, Deneau, Tom wrote:
> > > I tried an rpmbuild on Fedora 21 from the tarball, which seemed to work OK.
> > > But I'm having trouble doing "ceph-deploy --overwrite-conf mon create-initial" with 9.1.0.
> > > This is using ceph-deploy version 1.5.24.
> > > Is this part of the "needs Fedora 22 or later" story?
> > 
> > Yeah, I think so, but it's probably mostly a "tested fc22 and it worked"
> > situation.  This is probably what is failing:
> > 
> > https://github.com/ceph/ceph-deploy/blob/master/ceph_deploy/hosts/fedora/__init__.py#L21
> > 
> > So maybe the specfile isn't using systemd for fc21?
> > 
> > sage
> > 
> > 
> > >
> > > -- Tom
> > >
> > > [myhost][DEBUG ] create a done file to avoid re-doing the mon deployment
> > > [myhost][DEBUG ] create the init path if it does not exist
> > > [myhost][DEBUG ] locating the `service` executable...
> > > [myhost][INFO  ] Running command: /usr/sbin/service ceph -c /etc/ceph/ceph.conf start mon.myhost
> > > [myhost][WARNIN] The service command supports only basic LSB actions (start, stop, restart, try-restart, reload, force-reload, status). For other actions, please try to use systemctl.
> > > [myhost][ERROR ] RuntimeError: command returned non-zero exit status: 2
> > > [ceph_deploy.mon][ERROR ] Failed to execute command: /usr/sbin/service ceph -c /etc/ceph/ceph.conf start mon.myhost
> > > [ceph_deploy][ERROR ] GenericError: Failed to create 1 monitors
> > >
> > >
> > > > -----Original Message-----
> > > > From: ceph-devel-owner@vger.kernel.org [mailto:ceph-devel-
> > > > owner@vger.kernel.org] On Behalf Of Sage Weil
> > > > Sent: Tuesday, October 13, 2015 4:02 PM
> > > > To: ceph-announce@ceph.com; ceph-devel@vger.kernel.org; ceph-
> > > > users@ceph.com; ceph-maintainers@ceph.com
> > > > Subject: v9.1.0 Infernalis release candidate released
> > > >
> > > > This is the first Infernalis release candidate.  There have been
> > > > some major changes since hammer, and the upgrade process is non-
> > trivial.
> > > > Please read carefully.
> > > >
> > > > Getting the release candidate
> > > > -----------------------------
> > > >
> > > > The v9.1.0 packages are pushed to the development release
> > repositories::
> > > >
> > > >   http://download.ceph.com/rpm-testing
> > > >   http://download.ceph.com/debian-testing
> > > >
> > > > For for info, see::
> > > >
> > > >   http://docs.ceph.com/docs/master/install/get-packages/
> > > >
> > > > Or install with ceph-deploy via::
> > > >
> > > >   ceph-deploy install --testing HOST
> > > >
> > > > Known issues
> > > > ------------
> > > >
> > > > * librbd and librados ABI compatibility is broken.  Be careful
> > > >   installing this RC on client machines (e.g., those running qemu).
> > > >   It will be fixed in the final v9.2.0 release.
> > > >
> > > > Major Changes from Hammer
> > > > -------------------------
> > > >
> > > > * *General*:
> > > >   * Ceph daemons are now managed via systemd (with the exception of
> > > >     Ubuntu Trusty, which still uses upstart).
> > > >   * Ceph daemons run as 'ceph' user instead root.
> > > >   * On Red Hat distros, there is also an SELinux policy.
> > > > * *RADOS*:
> > > >   * The RADOS cache tier can now proxy write operations to the base
> > > >     tier, allowing writes to be handled without forcing migration of
> > > >     an object into the cache.
> > > >   * The SHEC erasure coding support is no longer flagged as
> > > >     experimental. SHEC trades some additional storage space for faster
> > > >     repair.
> > > >   * There is now a unified queue (and thus prioritization) of client
> > > >     IO, recovery, scrubbing, and snapshot trimming.
> > > >   * There have been many improvements to low-level repair tooling
> > > >     (ceph-objectstore-tool).
> > > >   * The internal ObjectStore API has been significantly cleaned up
> > > > in order
> > > >     to faciliate new storage backends like NewStore.
> > > > * *RGW*:
> > > >   * The Swift API now supports object expiration.
> > > >   * There are many Swift API compatibility improvements.
> > > > * *RBD*:
> > > >   * The ``rbd du`` command shows actual usage (quickly, when
> > > >     object-map is enabled).
> > > >   * The object-map feature has seen many stability improvements.
> > > >   * Object-map and exclusive-lock features can be enabled or disabled
> > > >     dynamically.
> > > >   * You can now store user metadata and set persistent librbd options
> > > >     associated with individual images.
> > > >   * The new deep-flatten features allows flattening of a clone and all
> > > >     of its snapshots.  (Previously snapshots could not be flattened.)
> > > >   * The export-diff command command is now faster (it uses aio).
> > > > There is also
> > > >     a new fast-diff feature.
> > > >   * The --size argument can be specified with a suffix for units
> > > >     (e.g., ``--size 64G``).
> > > >   * There is a new ``rbd status`` command that, for now, shows who has
> > > >     the image open/mapped.
> > > > * *CephFS*:
> > > >   * You can now rename snapshots.
> > > >   * There have been ongoing improvements around administration,
> > > > diagnostics,
> > > >     and the check and repair tools.
> > > >   * The caching and revocation of client cache state due to unused
> > > >     inodes has been dramatically improved.
> > > >   * The ceph-fuse client behaves better on 32-bit hosts.
> > > >
> > > > Distro compatibility
> > > > --------------------
> > > >
> > > > We have decided to drop support for many older distributions so that
> > > > we can move to a newer compiler toolchain (e.g., C++11).  Although
> > > > it is still possible to build Ceph on older distributions by
> > > > installing backported development tools, we are not building and
> > > > publishing release packages for ceph.com.
> > > >
> > > > In particular,
> > > >
> > > > * CentOS 7 or later; we have dropped support for CentOS 6 (and other
> > > >   RHEL 6 derivatives, like Scientific Linux 6).
> > > > * Debian Jessie 8.x or later; Debian Wheezy 7.x's g++ has incomplete
> > > >   support for C++11 (and no systemd).
> > > > * Ubuntu Trusty 14.04 or later; Ubuntu Precise 12.04 is no longer
> > > >   supported.
> > > > * Fedora 22 or later.
> > > >
> > > > Upgrading from Firefly
> > > > ----------------------
> > > >
> > > > Upgrading directly from Firefly v0.80.z is not possible.  All
> > > > clusters must first upgrade to Hammer v0.94.4 or a later v0.94.z
> > > > release; only then is it possible to upgrade to Infernalis 9.2.z.
> > > >
> > > > Note that v0.94.4 isn't released yet, but you can upgrade to a test
> > > > build from gitbuilder with::
> > > >
> > > >   ceph-deploy install --dev hammer HOST
> > > >
> > > > The v0.94.4 Hammer point release will be out before v9.2.0
> > > > Infernalis is.
> > > >
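> > > > To confirm the whole cluster is already running v0.94.z before
> > > > continuing, something like this should work (a sketch; it assumes
> > > > each mon id is the short hostname)::
> > > >
> > > >   ceph tell osd.* version
> > > >   ceph daemon mon.$(hostname -s) version   # on each monitor host
> > > >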
> > > > Upgrading from Hammer
> > > > ---------------------
> > > >
> > > > * For all distributions that support systemd (CentOS 7, Fedora, Debian
> > > >   Jessie 8.x, OpenSUSE), ceph daemons are now managed using native
> > > >   systemd files instead of the legacy sysvinit scripts.  For example::
> > > >
> > > >     systemctl start ceph.target       # start all daemons
> > > >     systemctl status ceph-osd@12      # check status of osd.12
> > > >
> > > >   The main notable distro that is *not* yet using systemd is Ubuntu
> > > >   Trusty 14.04.  (The next Ubuntu LTS, 16.04, will use systemd
> > > >   instead of upstart.)
> > > >
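> > > >   A few more invocations that may be handy (a sketch; the
> > > >   per-daemon-type targets such as ceph-osd.target are our assumption
> > > >   about the packaged unit names)::
> > > >
> > > >     systemctl stop ceph-mon@myhost        # stop a single monitor
> > > >     systemctl restart ceph-osd.target     # restart all OSDs on a host
> > > >     systemctl enable ceph.target          # start ceph at boot
> > > >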
> > > > * Ceph daemons now run as user and group ``ceph`` by default.  The
> > > >   ceph user has a static UID assigned by Fedora and Debian (also used
> > > >   by derivative distributions like RHEL/CentOS and Ubuntu).  On SUSE
> > > >   the ceph user will currently get a dynamically assigned UID when the
> > > >   user is created.
> > > >
> > > >   If your systems already have a ceph user, upgrading the package
> > > >   will cause problems.  We suggest you first remove or rename the
> > > >   existing 'ceph' user before upgrading.
> > > >
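> > > >   For example, to check for and rename a pre-existing user and group
> > > >   before upgrading (a sketch; the name 'ceph-old' is arbitrary)::
> > > >
> > > >     getent passwd ceph && usermod -l ceph-old ceph
> > > >     getent group ceph && groupmod -n ceph-old ceph
> > > >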
> > > >   When upgrading, administrators have two options:
> > > >
> > > >    #. Add the following line to ``ceph.conf`` on all hosts::
> > > >
> > > >         setuser match path = /var/lib/ceph/$type/$cluster-$id
> > > >
> > > >       This will make the Ceph daemons run as root (i.e., not drop
> > > >       privileges and switch to user ceph) if the daemon's data
> > > >       directory is still owned by root.  Newly deployed daemons will
> > > >       be created with data owned by user ceph and will run with
> > > >       reduced privileges, but upgraded daemons will continue to run as
> > > >       root.
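> > > >
> > > >       Afterwards you can confirm what each daemon is actually
> > > >       running as (a sketch; osd.12 matches the example above)::
> > > >
> > > >         ls -ld /var/lib/ceph/osd/ceph-12   # data dir ownership
> > > >         ps -o user= -C ceph-osd            # user the OSDs run as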
> > > >
> > > >    #. Fix the data ownership during the upgrade.  This is the
> > > >       preferred option, but is more work; a consolidated sketch of
> > > >       the full sequence follows these steps.  The process for each
> > > >       host would be to:
> > > >
> > > >       #. Upgrade the ceph package.  This creates the ceph user and
> > > >          group.  For example::
> > > >
> > > >            ceph-deploy install --stable infernalis HOST
> > > >
> > > >       #. Stop the daemon(s)::
> > > >
> > > >            service ceph stop           # fedora, centos, rhel, debian
> > > >            stop ceph-all               # ubuntu
> > > >
> > > >       #. Fix the ownership::
> > > >
> > > >            chown -R ceph:ceph /var/lib/ceph
> > > >
> > > >       #. Restart the daemon(s)::
> > > >
> > > >            start ceph-all                # ubuntu
> > > >            systemctl start ceph.target   # debian, centos, fedora, rhel
> > > >
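> > > >       Put together, for an Ubuntu Trusty host the sequence might
> > > >       look like this (a sketch only; substitute the systemd commands
> > > >       shown above on other distros)::
> > > >
> > > >          ceph-deploy install --stable infernalis myhost
> > > >          ssh myhost 'sudo stop ceph-all'
> > > >          ssh myhost 'sudo chown -R ceph:ceph /var/lib/ceph'
> > > >          ssh myhost 'sudo start ceph-all'
> > > >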
> > > > * The on-disk format for the experimental KeyValueStore OSD backend
> > > >   has changed.  You will need to remove any OSDs using that backend
> > > >   before you upgrade any test clusters that use it.
> > > >
> > > > Upgrade notes
> > > > -------------
> > > >
> > > > * When a pool quota is reached, librados operations now block
> > > >   indefinitely, the same way they do when the cluster fills up.
> > > >   (Previously they would return -ENOSPC.)  By default, a full
> > > >   cluster or pool will now block.  If your librados application can
> > > >   handle ENOSPC or EDQUOT errors gracefully, you can get error
> > > >   returns instead by using the new librados OPERATION_FULL_TRY flag.
> > > >
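> > > >   For example, the blocking behavior is easy to see on a test pool
> > > >   by setting a tiny quota and writing past it (a sketch; 'rbd' is
> > > >   just an example pool name and ./2mb-file a placeholder)::
> > > >
> > > >     ceph osd pool set-quota rbd max_bytes 1048576
> > > >     rados -p rbd put obj1 ./2mb-file   # fills the pool past quota
> > > >     rados -p rbd put obj2 ./2mb-file   # blocks once the pool is
> > > >                                        # flagged full
> > > >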
> > > > Notable changes
> > > > ---------------
> > > >
> > > > NOTE: These notes are somewhat abbreviated while we find a less
> > > > time-consuming process for generating them.
> > > >
> > > > * build: C++11 now supported
> > > > * build: many cmake improvements
> > > > * build: OSX build fixes (Yan, Zheng)
> > > > * build: remove rest-bench
> > > > * ceph-disk: many fixes (Loic Dachary)
> > > > * ceph-disk: support for multipath devices (Loic Dachary)
> > > > * ceph-fuse: mostly behave on 32-bit hosts (Yan, Zheng)
> > > > * ceph-objectstore-tool: many improvements (David Zafman)
> > > > * common: bufferlist performance tuning (Piotr Dalek, Sage Weil)
> > > > * common: make mutex more efficient
> > > > * common: some async compression infrastructure (Haomai Wang)
> > > > * librados: add FULL_TRY and FULL_FORCE flags for dealing with full
> > > >   clusters or pools (Sage Weil)
> > > > * librados: fix notify completion race (#13114 Sage Weil)
> > > > * librados, libcephfs: randomize client nonces (Josh Durgin)
> > > > * librados: pybind: fix binary omap values (Robin H. Johnson)
> > > > * librbd: fix reads larger than the cache size (Lu Shi)
> > > > * librbd: metadata filter fixes (Haomai Wang)
> > > > * librbd: use write_full when possible (Zhiqiang Wang)
> > > > * mds: avoid emitting cap warnings before evicting session
> > > >   (John Spray)
> > > > * mds: fix expected holes in journal objects (#13167 Yan, Zheng)
> > > > * mds: fix SnapServer crash on deleted pool (John Spray)
> > > > * mds: many fixes (Yan, Zheng, John Spray, Greg Farnum)
> > > > * mon: add cache over MonitorDBStore (Kefu Chai)
> > > > * mon: 'ceph osd metadata' can dump all osds (Haomai Wang)
> > > > * mon: detect kv backend failures (Sage Weil)
> > > > * mon: fix CRUSH map test for new pools (Sage Weil)
> > > > * mon: fix min_last_epoch_clean tracking (Kefu Chai)
> > > > * mon: misc scaling fixes (Sage Weil)
> > > > * mon: streamline session handling, fix memory leaks (Sage Weil)
> > > > * mon: upgrades must pass through hammer (Sage Weil)
> > > > * msg/async: many fixes (Haomai Wang)
> > > > * osd: cache proxy-write support (Zhiqiang Wang, Samuel Just)
> > > > * osd: configure promotion based on write recency (Zhiqiang Wang)
> > > > * osd: don't send dup MMonGetOSDMap requests (Sage Weil, Kefu Chai)
> > > > * osd: erasure-code: fix SHEC floating point bug (#12936 Loic
> > > >   Dachary)
> > > > * osd: erasure-code: update to ISA-L 2.14 (Yuan Zhou)
> > > > * osd: fix hitset object naming to use GMT (Kefu Chai)
> > > > * osd: fix misc memory leaks (Sage Weil)
> > > > * osd: fix peek_queue locking in FileStore (Xinze Chi)
> > > > * osd: fix promotion vs full cache tier (Samuel Just)
> > > > * osd: fix replay requeue when pg is still activating (#13116
> > > >   Samuel Just)
> > > > * osd: fix scrub stat bugs (Sage Weil, Samuel Just)
> > > > * osd: force promotion for ops EC can't handle (Zhiqiang Wang)
> > > > * osd: improve behavior on machines with large memory pages
> > > >   (Steve Capper)
> > > > * osd: merge multiple setattr calls into a setattrs call
> > > >   (Xinxin Shu)
> > > > * osd: newstore prototype (Sage Weil)
> > > > * osd: ObjectStore internal API refactor (Sage Weil)
> > > > * osd: SHEC no longer experimental
> > > > * osd: throttle evict ops (Yunchuan Wen)
> > > > * osd: upgrades must pass through hammer (Sage Weil)
> > > > * osd: use SEEK_HOLE / SEEK_DATA for sparse copy (Xinxin Shu)
> > > > * rbd: rbd-replay-prep and rbd-replay improvements (Jason Dillaman)
> > > > * rgw: expose the number of unhealthy workers through admin socket
> > > >   (Guang Yang)
> > > > * rgw: fix casing of Content-Type header (Robin H. Johnson)
> > > > * rgw: fix decoding of X-Object-Manifest from GET on Swift DLO
> > > >   (Radoslaw Rzarzynski)
> > > > * rgw: fix sysvinit script
> > > > * rgw: fix sysvinit script w/ multiple instances (Sage Weil,
> > > >   Pavan Rallabhandi)
> > > > * rgw: improve handling of already removed buckets in expirer
> > > >   (Radoslaw Rzarzynski)
> > > > * rgw: log to /var/log/ceph instead of /var/log/radosgw
> > > > * rgw: rework X-Trans-Id header to conform with the Swift API
> > > >   (Radoslaw Rzarzynski)
> > > > * rgw: s3 encoding-type for get bucket (Jeff Weber)
> > > > * rgw: set max buckets per user in ceph.conf (Vikhyat Umrao)
> > > > * rgw: support for Swift expiration API (Radoslaw Rzarzynski,
> > > >   Yehuda Sadeh)
> > > > * rgw: user rm is idempotent (Orit Wasserman)
> > > > * selinux policy (Boris Ranto, Milan Broz)
> > > > * systemd: many fixes (Sage Weil, Owen Synge, Boris Ranto,
> > > >   Dan van der Ster)
> > > > * systemd: run daemons as user ceph
> > > >
> > > > Getting Ceph
> > > > ------------
> > > >
> > > > * Git at git://github.com/ceph/ceph.git
> > > > * Tarball at http://download.ceph.com/tarballs/ceph-9.1.0.tar.gz
> > > > * For packages, see http://ceph.com/docs/master/install/get-packages
> > > > * For ceph-deploy, see
> > > >   http://ceph.com/docs/master/install/install-ceph-deploy

^ permalink raw reply	[flat|nested] 16+ messages in thread

* RE: v9.1.0 Infernalis release candidate released
  2015-10-14 20:59       ` Sage Weil
@ 2015-10-14 21:23         ` Deneau, Tom
  2015-10-14 21:29           ` Sage Weil
  0 siblings, 1 reply; 16+ messages in thread
From: Deneau, Tom @ 2015-10-14 21:23 UTC (permalink / raw)
  To: Sage Weil; +Cc: ceph-devel



> -----Original Message-----
> From: Sage Weil [mailto:sage@newdream.net]
> Sent: Wednesday, October 14, 2015 3:59 PM
> To: Deneau, Tom
> Cc: ceph-devel@vger.kernel.org
> Subject: RE: v9.1.0 Infernalis release candidate released
> 
> On Wed, 14 Oct 2015, Deneau, Tom wrote:
> > > Trying to bring up a cluster using the pre-built binary packages on
> > > Ubuntu Trusty:
> > > Installed using "ceph-deploy install --dev infernalis `hostname`"
> >
> > > This install seemed to work, but when I later tried
> > >    ceph-deploy --overwrite-conf mon create-initial
> > > it failed with
> > >
> > > [][INFO  ] Running command: ceph --cluster=ceph --admin-daemon
> > >     /var/run/ceph/ceph-mon.myhost.asok mon_status
> > > [][ERROR ] admin_socket: exception getting command descriptions:
> > >     [Errno 2] No such file or directory
> >
> > and indeed /var/run/ceph was empty.
> >
> > > I wasn't sure if this was due to an existing user named ceph (I
> > > hadn't checked), but I did a userdel of ceph and a ceph-deploy
> > > uninstall and reinstall.
> >
> > > Now the install part is getting an error near where it tries to
> > > create the ceph user.
> >
> > > [][DEBUG ] Adding system user ceph....done
> > > [][DEBUG ] Setting system user ceph properties..
> > > Processing triggers for libc-bin (2.19-0ubuntu6.6) ...
> > > [][WARNIN] usermod: user 'ceph' does not exist
> >
> > Any suggestions for recovering from this situation?
> 
> > I'm guessing this is.. trusty?  Did you remove the package, then verify
> > the user is deleted, then (re)install?  You may need to do a dpkg purge
> > (not just uninstall/remove) to make it forget its state...
> 
> I'm re-running the ceph-deploy test suite (centos7, trusty) to make sure
> nothing is awry...
> 
> > 	http://pulpito.ceph.com/sage-2015-10-14_13:55:41-ceph-deploy-infernalis---basic-vps/
> 
> sage
> 

Yes, did the steps above including purge.
Could I just manually create the ceph user to get around this?

-- Tom

> 
> > -- Tom
> >
> > > -----Original Message-----
> > > From: Sage Weil [mailto:sage@newdream.net]
> > > Sent: Wednesday, October 14, 2015 12:40 PM
> > > To: Deneau, Tom
> > > Cc: ceph-devel@vger.kernel.org
> > > Subject: RE: v9.1.0 Infernalis release candidate released
> > >
> > > On Wed, 14 Oct 2015, Deneau, Tom wrote:
> > > > I tried an rpmbuild on Fedora21 from the tarball, which seemed to
> > > > work ok.
> > > > But having trouble doing "ceph-deploy --overwrite-conf mon
> > > > create-initial" with 9.1.0.
> > > > This is using ceph-deploy version 1.5.24.
> > > > Is this part of the "needs Fedora 22 or later" story?
> > >
> > > Yeah I think so, but it's probably mostly a "tested fc22 and it
> > > worked" situation.  This is probably what is failing:
> > >
> > > https://github.com/ceph/ceph-deploy/blob/master/ceph_deploy/hosts/fedora/__init__.py#L21
> > >
> > > So maybe the specfile isn't using systemd for fc21?
> > >
> > > sage
> > >
> > >
> > > >
> > > > -- Tom
> > > >
> > > > [myhost][DEBUG ] create a done file to avoid re-doing the mon
> > > >     deployment
> > > > [myhost][DEBUG ] create the init path if it does not exist
> > > > [myhost][DEBUG ] locating the `service` executable...
> > > > [myhost][INFO  ] Running command: /usr/sbin/service ceph -c
> > > >     /etc/ceph/ceph.conf start mon.myhost
> > > > [myhost][WARNIN] The service command supports only basic LSB actions
> > > >     (start, stop, restart, try-restart, reload, force-reload,
> > > >     status). For other actions, please try to use systemctl.
> > > > [myhost][ERROR ] RuntimeError: command returned non-zero exit
> > > >     status: 2
> > > > [ceph_deploy.mon][ERROR ] Failed to execute command:
> > > >     /usr/sbin/service ceph -c /etc/ceph/ceph.conf start mon.myhost
> > > > [ceph_deploy][ERROR ] GenericError: Failed to create 1 monitors
> > > >
> > > >
> > > > > > [original v9.1.0 announcement (Sage Weil, 2015-10-13) quoted in
> > > > > > full; snipped]

^ permalink raw reply	[flat|nested] 16+ messages in thread

* RE: v9.1.0 Infernalis release candidate released
  2015-10-14 21:23         ` Deneau, Tom
@ 2015-10-14 21:29           ` Sage Weil
  2015-10-14 22:08             ` Deneau, Tom
  0 siblings, 1 reply; 16+ messages in thread
From: Sage Weil @ 2015-10-14 21:29 UTC (permalink / raw)
  To: Deneau, Tom; +Cc: ceph-devel

On Wed, 14 Oct 2015, Deneau, Tom wrote:
> > [Sage's earlier reply snipped]
> 
> Yes, did the steps above including purge.
> Could I just manually create the ceph user to get around this?

You could, but since the above tests just passed, I'm super curious why 
it's failing for you.  This is the relevant piece of code:

	https://github.com/ceph/ceph/blob/infernalis/debian/ceph-common.postinst#L60

After it fails, is ceph in /etc/passwd?  Is there anything in 
/etc/default/ceph that could be clobbering the defaults?
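
Something like this would show it (a sketch of the checks I mean):

	getent passwd ceph                 # did postinst create the user?
	getent group ceph                  # ...and the group?
	cat /etc/default/ceph              # anything overriding the defaults?
	dpkg -s ceph-common | grep Status  # package state after the failure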

sage



> [remainder of quoted message, including the full announcement, snipped]

^ permalink raw reply	[flat|nested] 16+ messages in thread

* RE: v9.1.0 Infernalis release candidate released
  2015-10-14 21:29           ` Sage Weil
@ 2015-10-14 22:08             ` Deneau, Tom
  0 siblings, 0 replies; 16+ messages in thread
From: Deneau, Tom @ 2015-10-14 22:08 UTC (permalink / raw)
  To: Sage Weil; +Cc: ceph-devel



> -----Original Message-----
> From: Sage Weil [mailto:sage@newdream.net]
> Sent: Wednesday, October 14, 2015 4:30 PM
> To: Deneau, Tom
> Cc: ceph-devel@vger.kernel.org
> Subject: RE: v9.1.0 Infernalis release candidate released
> 
> On Wed, 14 Oct 2015, Deneau, Tom wrote:
> > [earlier exchange snipped]
> >
> > Yes, did the steps above including purge.
> > Could I just manually create the ceph user to get around this?
> 
> You could, but since the above tests just passed, I'm super curious why
> it's failing for you.  This is the relevant piece of code:
> 
> 	https://github.com/ceph/ceph/blob/infernalis/debian/ceph-common.postinst#L60
> 
> After it fails, is ceph in /etc/passwd?  Is there anything in
> /etc/default/ceph that could be clobbering the defaults?
> 
> sage
> 
> 
Ah, I see part of the problem: the old user ceph was also part of a
ceph group, and I had not done a groupdel of that group.  Having done
that, the user creation during install now works, and I see in
/etc/passwd:

  ceph:x:64045:64045:Ceph storage service:/var/lib/ceph:/bin/false


I am still getting this error, however, from
"ceph-deploy --overwrite-conf mon create-initial":

[INFO  ] Running command: ceph --cluster=ceph --admin-daemon
    /var/run/ceph/ceph-mon.Intel-2P-Sandy-Bridge-04.asok mon_status
[ERROR ] admin_socket: exception getting command descriptions:
    [Errno 2] No such file or directory

I see the /var/run/ceph directory is there and owned by user ceph, but
it is empty.

I will keep poking around.
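
Next things I plan to check (a sketch; I'm assuming the trusty upstart
job is named ceph-mon and takes an id= parameter):

  initctl list | grep ceph                 # any ceph upstart jobs known?
  sudo start ceph-mon id=$(hostname -s)    # try starting the mon directly
  ls -l /var/run/ceph                      # did an admin socket appear?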

-- Tom


> 
> >
> > -- Tom
> >
> > >
> > > > -- Tom
> > > >
> > > > > -----Original Message-----
> > > > > From: Sage Weil [mailto:sage@newdream.net]
> > > > > Sent: Wednesday, October 14, 2015 12:40 PM
> > > > > To: Deneau, Tom
> > > > > Cc: ceph-devel@vger.kernel.org
> > > > > Subject: RE: v9.1.0 Infernalis release candidate released
> > > > >
> > > > > On Wed, 14 Oct 2015, Deneau, Tom wrote:
> > > > > > I tried an rpmbuild on Fedora21 from the tarball which seemed
> > > > > > to work
> > > > > ok.
> > > > > > But having trouble doing "ceph-deploy --overwrite-conf mon
> > > > > > create-
> > > > > initial" with 9.1.0".
> > > > > > This is using ceph-deploy version 1.5.24.
> > > > > > Is this part of the "needs Fedora 22 or later" story?
> > > > >
> > > > > Yeah I think so, but it's probably mostly a "tested fc22 and it
> > > worked"
> > > > > situation.  THis is probably what is failing:
> > > > >
> > > > > https://github.com/ceph/ceph-
> > > > > deploy/blob/master/ceph_deploy/hosts/fedora/__init__.py#L21
> > > > >
> > > > > So maybe the specfile isn't using systemd for fc21?
> > > > >
> > > > > sage
> > > > >
> > > > >
> > > > > >
> > > > > > -- Tom
> > > > > >
> > > > > > [myhost][DEBUG ] create a done file to avoid re-doing the mon
> > > > > > deployment [myhost][DEBUG ] create the init path if it does
> > > > > > not exist [myhost][DEBUG ] locating the `service` executable...
> > > > > > [myhost][INFO  ] Running command: /usr/sbin/service ceph -c
> > > > > > /etc/ceph/ceph.conf start mon.myhost [myhost][WARNIN] The
> > > > > > service command supports only basic LSB actions (start, stop,
> > > > > > restart,
> > > > > > try-
> > > > > restart, reload, force-reload, sta\ tus). For other actions,
> > > > > please try to use systemctl.
> > > > > > [myhost][ERROR ] RuntimeError: command returned non-zero exit
> > > status:
> > > > > > 2 [ceph_deploy.mon][ERROR ] Failed to execute command:
> > > > > > /usr/sbin/service ceph -c /etc/ceph/ceph.conf start mon.myhost
> > > > > > [ceph_deploy][ERROR ] GenericError: Failed to create 1
> > > > > > monitors
> > > > > >
> > > > > >
> > > > > > > -----Original Message-----
> > > > > > > From: ceph-devel-owner@vger.kernel.org [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of Sage Weil
> > > > > > > Sent: Tuesday, October 13, 2015 4:02 PM
> > > > > > > To: ceph-announce@ceph.com; ceph-devel@vger.kernel.org; ceph-users@ceph.com; ceph-maintainers@ceph.com
> > > > > > > Subject: v9.1.0 Infernalis release candidate released
> > > > > > >
> > > > > > > [full release announcement snipped; see the start of this thread]
> > > > > >
> > > > > >

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: v9.1.0 Infernalis release candidate released
       [not found]         ` <alpine.DEB.2.00.1510140527120.6589-vIokxiIdD2AQNTJnQDzGJqxOck334EZe@public.gmane.org>
@ 2015-10-14 23:56           ` Goncalo Borges
       [not found]             ` <561EEBAA.6070707-JAjqph6Yjy/r8pF6qxatHrpzq4S04n8Q@public.gmane.org>
  0 siblings, 1 reply; 16+ messages in thread
From: Goncalo Borges @ 2015-10-14 23:56 UTC (permalink / raw)
  To: Sage Weil, Dan van der Ster
  Cc: ceph-devel-u79uwXL29TY76Z2rM5mHXA, ceph-users,
	ceph-maintainers-Qp0mS5GaXlQ

Hi Sage, Dan...

In our case, we have invested heavily in testing CephFS. It seems like
a good solution to some of the issues we currently experience with the
use cases from our researchers.

While I do not see a problem in deploying a Ceph cluster on SL7, I suspect
that we will need CephFS clients on SL6 for quite some time. The problem
here is that our researchers use a whole bunch of software provided by
the CERN experiments to generate MC data or analyse experimental data.
This software is currently certified for SL6, and I think that an SL7
version will take a considerable amount of time. So we need a CephFS
client that allows our researchers to access and analyse the data in
that environment.

If you guys did not think it was worth the effort to build for those
flavors, that actually tells me this is a complicated task that, most
probably, I cannot do on my own.

I am currently interacting with Dan and other colleagues on a CERN
mailing list. Let us see what the outcome of that discussion will be.

But at the moment I am open to suggestions.

TIA
Goncalo

On 10/14/2015 11:30 PM, Sage Weil wrote:
> On Wed, 14 Oct 2015, Dan van der Ster wrote:
>> Hi Goncalo,
>>
>> On Wed, Oct 14, 2015 at 6:51 AM, Goncalo Borges
>> <goncalo-JAjqph6Yjy/r8pF6qxatHrpzq4S04n8Q@public.gmane.org> wrote:
>>> Hi Sage...
>>>
>>> I've seen that the rh6 derivatives have been ruled out.
>>>
>>> This is a problem in our case since the OS choice in our systems is,
>>> somehow, imposed by CERN. The experiments software is certified for SL6 and
>>> the transition to SL7 will take some time.
>> Are you accessing Ceph directly from "physics" machines? Here at CERN
>> we run CentOS 7 on the native clients (e.g. qemu-kvm hosts) and by the
>> time we upgrade to Infernalis the servers will all be CentOS 7 as
>> well. Batch nodes running SL6 don't (currently) talk to Ceph directly
>> (in the future they might talk to Ceph-based storage via an xroot
>> gateway). But if there are use-cases then perhaps we could find a
>> place to build and distribute the newer ceph clients.
>>
>> There's a ML ceph-talk-vJEk5272eHo@public.gmane.org where we could take this discussion.
>> Mail me if you have trouble joining that e-Group.
> Also note that it *is* possible to build infernalis on el6, but it
> requires a lot more effort... enough that we would rather spend our time
> elsewhere (at least as far as ceph.com packages go).  If someone else
> wants to do that work we'd be happy to take patches to update the build
> and/or release process.
>
> IIRC the thing that eventually made me stop going down this path was the
> fact that the newer gcc had a runtime dependency on the newer libstdc++,
> which wasn't part of the base distro... which means we'd also need to
> publish those packages in the ceph.com repos, or users would have to
> add some backport repo or ppa or whatever to get things running.  Bleh.
>
> sage
>
>
>> Cheers, Dan
>> CERN IT-DSS
>>
>>> This is kind of a showstopper, especially if we can't deploy clients in SL6 /
>>> Centos6.
>>>
>>> Is there any alternative?
>>>
>>> TIA
>>> Goncalo
>>>
>>>
>>>
>>> On 10/14/2015 08:01 AM, Sage Weil wrote:
>>>> [full release announcement snipped; see the start of this thread]
>>>
>>> --
>>> Goncalo Borges
>>> Research Computing
>>> ARC Centre of Excellence for Particle Physics at the Terascale
>>> School of Physics A28 | University of Sydney, NSW  2006
>>> T: +61 2 93511937
>>>
>>>

-- 
Goncalo Borges
Research Computing
ARC Centre of Excellence for Particle Physics at the Terascale
School of Physics A28 | University of Sydney, NSW  2006
T: +61 2 93511937

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: v9.1.0 Infernalis release candidate released
       [not found]             ` <561EEBAA.6070707-JAjqph6Yjy/r8pF6qxatHrpzq4S04n8Q@public.gmane.org>
@ 2015-10-15  1:17               ` Sage Weil
  0 siblings, 0 replies; 16+ messages in thread
From: Sage Weil @ 2015-10-15  1:17 UTC (permalink / raw)
  To: Goncalo Borges; +Cc: ceph-devel-u79uwXL29TY76Z2rM5mHXA, ceph-users

On Thu, 15 Oct 2015, Goncalo Borges wrote:
> Hi Sage, Dan...
> 
> In our case, we have invested heavily in testing CephFS. It seems like a
> good solution to some of the issues we currently experience with the use
> cases from our researchers.
> 
> While I do not see a problem in deploying a Ceph cluster on SL7, I suspect that
> we will need CephFS clients on SL6 for quite some time. The problem here is
> that our researchers use a whole bunch of software provided by the CERN
> experiments to generate MC data or analyse experimental data. This software is
> currently certified for SL6, and I think that an SL7 version will take a
> considerable amount of time. So we need a CephFS client that allows our
> researchers to access and analyse the data in that environment.
> 
> If you guys did not think it was worth the effort to build for those
> flavors, that actually tells me this is a complicated task that, most
> probably, I cannot do on my own.

I don't think it will be much of a problem.

First, if you're using the CephFS kernel client, the important bit is the 
kernel--you'll want something quite recent.  The OS doesn't really matter 
much.  The only piece that is of any use is mount.ceph, and even that is 
optional.  It does two semi-useful things: it resolves DNS if you 
identify your monitor(s) with something other than an IP (and actually the 
kernel can do this too if it's built with the right options), and it will 
turn a '-o secretfile=<file_containing_the_secret>' into a '-o 
secret=<secret>'.  Without it, it is slightly awkward to keep the ceph key 
out of /etc/fstab.  In any case, it's trivial to build that binary and 
install/distribute it in some other manner.
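
To make that concrete, here is a rough sketch in Python of the two
things the helper does.  This is illustrative only (mount.ceph itself
is a small C program) and prepare_ceph_mount is a made-up name::

    # Illustrative sketch, not the real mount.ceph.
    import socket

    def prepare_ceph_mount(mon_spec, options):
        """Resolve monitor names and expand secretfile= into secret=.

        mon_spec: e.g. 'mon1.example.com,mon2.example.com:/'
        options:  e.g. 'name=admin,secretfile=/etc/ceph/admin.secret'
        """
        hosts, _, path = mon_spec.partition(':')
        # The kernel client generally wants monitor IPs, so resolve
        # hostnames up front.
        resolved = ','.join(socket.gethostbyname(h)
                            for h in hosts.split(','))
        out = []
        for opt in options.split(','):
            key, _, value = opt.partition('=')
            if key == 'secretfile':
                # Read the key from a file so the secret itself never
                # has to appear in /etc/fstab.
                with open(value) as f:
                    out.append('secret=' + f.read().strip())
            else:
                out.append(opt)
        return resolved + ':' + path, ','.join(out)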

Or, you can build the ceph packages with the newer gcc... it isn't 
that painful.  I stopped because I didn't want to have us distributing 
newer versions of the libstdc++ libraries in the ceph repositories.

If you're talking about using libcephfs or ceph-fuse, then building those 
packages is inevitable... but probably not that onerous.

sage



> 
> I am currently interacting with Dan and other colleagues on a CERN mailing
> list. Let us see what the outcome of that discussion will be.
> 
> But at the moment I am open to suggestions.
> 
> TIA
> Goncalo
> 
> On 10/14/2015 11:30 PM, Sage Weil wrote:
> > [remainder of the quoted thread snipped; it repeats the earlier exchange
> > and the full release announcement]

^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread, other threads:[~2015-10-15  1:17 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-10-13 21:01 v9.1.0 Infernalis release candidate released Sage Weil
2015-10-14  0:32 ` Joao Eduardo Luis
     [not found] ` <alpine.DEB.2.00.1510131356001.27022-vIokxiIdD2AQNTJnQDzGJqxOck334EZe@public.gmane.org>
2015-10-14  4:51   ` Goncalo Borges
2015-10-14  8:11     ` [ceph-users] " Dan van der Ster
2015-10-14 12:30       ` Sage Weil
     [not found]         ` <alpine.DEB.2.00.1510140527120.6589-vIokxiIdD2AQNTJnQDzGJqxOck334EZe@public.gmane.org>
2015-10-14 23:56           ` Goncalo Borges
     [not found]             ` <561EEBAA.6070707-JAjqph6Yjy/r8pF6qxatHrpzq4S04n8Q@public.gmane.org>
2015-10-15  1:17               ` Sage Weil
2015-10-14  8:06 ` [Ceph-announce] " Gaudenz Steinlin
2015-10-14 12:37   ` Sage Weil
2015-10-14 17:24 ` Deneau, Tom
2015-10-14 17:39   ` Sage Weil
2015-10-14 20:39     ` Deneau, Tom
2015-10-14 20:59       ` Sage Weil
2015-10-14 21:23         ` Deneau, Tom
2015-10-14 21:29           ` Sage Weil
2015-10-14 22:08             ` Deneau, Tom

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.