* would people mind a slow osd restart during luminous upgrade?
@ 2017-02-09  3:09 Sage Weil
       [not found] ` <alpine.DEB.2.11.1702090302470.7782-qHenpvqtifaMSRpgCs4c+g@public.gmane.org>
  0 siblings, 1 reply; 10+ messages in thread
From: Sage Weil @ 2017-02-09  3:09 UTC (permalink / raw)
  To: ceph-users-Qp0mS5GaXlQ, ceph-devel-u79uwXL29TY76Z2rM5mHXA

Hello, ceph operators...

Several times in the past we've had to do some ondisk format conversion 
during upgrade, which meant that the first time the ceph-osd daemon started 
after the upgrade it had to spend a few minutes fixing up its ondisk files.  
We haven't had to recently, though, and generally try to avoid such 
things.

However, there's a change we'd like to make in FileStore for luminous (*) 
and it would save us a lot of time and complexity if it was a one-shot 
update during the upgrade.  It would probably take in the neighborhood of 
1-5 minutes for a 4-6TB HDD.  That means that when restarting the daemon 
during the upgrade, the OSD would stay down for that period (vs the usual 
<1 minute restart time).

Does this concern anyone?  It probably means the upgrades will take longer 
if you're going host by host since the time per host will go up.

sage


* eliminate 'snapdir' objects, replacing them with a head object + 
whiteout.
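
For anyone less familiar with snapdir objects, here is a rough conceptual
sketch of the change (illustrative Python only, not the actual ObjectStore
code; the names and structures are made up):

# Conceptual illustration only -- not Ceph source code.  It sketches why a
# "deleted head object with surviving snapshot clones" can be represented
# either by a separate snapdir object or by a head object flagged as a
# whiteout.

from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class HeadObject:
    name: str
    clones: List[int] = field(default_factory=list)  # snap ids of surviving clones
    whiteout: bool = False  # True: head is logically deleted, clones remain


def delete_head_old_style(obj: HeadObject) -> Optional[dict]:
    """Pre-luminous sketch: deleting a head with clones leaves a snapdir."""
    if obj.clones:
        return {"snapdir_for": obj.name, "clones": obj.clones}  # extra object
    return None  # no clones -> nothing left at all


def delete_head_new_style(obj: HeadObject) -> Optional[HeadObject]:
    """Luminous sketch: keep the head around, mark it as a whiteout instead."""
    if obj.clones:
        obj.whiteout = True  # same information, no separate snapdir object
        return obj
    return None


if __name__ == "__main__":
    print(delete_head_old_style(HeadObject("rbd_data.1234", clones=[5, 9])))
    print(delete_head_new_style(HeadObject("rbd_data.1234", clones=[5, 9])))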

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: would people mind a slow osd restart during luminous upgrade?
       [not found] ` <alpine.DEB.2.11.1702090302470.7782-qHenpvqtifaMSRpgCs4c+g@public.gmane.org>
@ 2017-02-09  4:19   ` David Turner
       [not found]     ` <CAN-Gep+5wDxpab3b5AN_i1m2uNHVvhWkjMWjmd7Rmi6Xbnciuw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  2017-02-09  5:57   ` Wido den Hollander
                     ` (2 subsequent siblings)
  3 siblings, 1 reply; 10+ messages in thread
From: David Turner @ 2017-02-09  4:19 UTC (permalink / raw)
  To: Sage Weil, ceph-users-Qp0mS5GaXlQ, ceph-devel-u79uwXL29TY76Z2rM5mHXA



The only issue I can think of is if there isn't a version of the clients
fully tested to work with a partially upgraded cluster or a documented
incompatibility requiring downtime. We've had upgrades where we had to
upgrade clients first and others that we had to do the clients last due to
issues with how the clients interacted with an older cluster, partially
upgraded cluster, or newer cluster.

If the FileStore is changing this much, I can imagine a Jewel client having
a hard time locating the objects it needs from a Luminous cluster.
On Wed, Feb 8, 2017 at 8:09 PM Sage Weil <sweil-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:

> Hello, ceph operators...
>
> Several times in the past we've had to do some ondisk format conversion
> during upgrade, which meant that the first time the ceph-osd daemon started
> after the upgrade it had to spend a few minutes fixing up its ondisk files.
> We haven't had to recently, though, and generally try to avoid such
> things.
>
> However, there's a change we'd like to make in FileStore for luminous (*)
> and it would save us a lot of time and complexity if it was a one-shot
> update during the upgrade.  It would probably take in the neighborhood of
> 1-5 minutes for a 4-6TB HDD.  That means that when restarting the daemon
> during the upgrade, the OSD would stay down for that period (vs the usual
> <1 minute restart time).
>
> Does this concern anyone?  It probably means the upgrades will take longer
> if you're going host by host since the time per host will go up.
>
> sage
>
>
> * eliminate 'snapdir' objects, replacing them with a head object +
> whiteout.
> _______________________________________________
> ceph-users mailing list
> ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>


^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: would people mind a slow osd restart during luminous upgrade?
       [not found] ` <alpine.DEB.2.11.1702090302470.7782-qHenpvqtifaMSRpgCs4c+g@public.gmane.org>
  2017-02-09  4:19   ` David Turner
@ 2017-02-09  5:57   ` Wido den Hollander
  2017-02-09  8:41   ` Henrik Korkuc
  2017-02-09 13:30   ` George Mihaiescu
  3 siblings, 0 replies; 10+ messages in thread
From: Wido den Hollander @ 2017-02-09  5:57 UTC (permalink / raw)
  To: ceph-users-Qp0mS5GaXlQ, Sage Weil, ceph-devel-u79uwXL29TY76Z2rM5mHXA


> Op 9 februari 2017 om 4:09 schreef Sage Weil <sweil-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>:
> 
> 
> Hello, ceph operators...
> 
> Several times in the past we've had to do some ondisk format conversion 
> during upgrade, which meant that the first time the ceph-osd daemon started 
> after the upgrade it had to spend a few minutes fixing up its ondisk files.  
> We haven't had to recently, though, and generally try to avoid such 
> things.
> 
> However, there's a change we'd like to make in FileStore for luminous (*) 
> and it would save us a lot of time and complexity if it was a one-shot 
> update during the upgrade.  It would probably take in the neighborhood of 
> 1-5 minutes for a 4-6TB HDD.  That means that when restarting the daemon 
> during the upgrade, the OSD would stay down for that period (vs the usual 
> <1 minute restart time).
> 
> Does this concern anyone?  It probably means the upgrades will take longer 
> if you're going host by host since the time per host will go up.
> 

Not really. When going to Jewel, data had to be chowned to ceph:ceph as well. As long as we make sure it's stated very clearly in the Release Notes, we should be OK.

Wido
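
For comparison, that Jewel-era step looked roughly like the sketch below (a
minimal sketch assuming the default /var/lib/ceph/osd/ceph-<id> directories
and systemd units; the chown one-liner in the release notes is the
authoritative procedure):

# Minimal sketch of the kind of one-shot step the Jewel upgrade required:
# stop each OSD, chown its data directory to ceph:ceph, start it again.
# Assumes default /var/lib/ceph/osd/ceph-<id> paths and systemd units.

import glob
import os
import subprocess


def reown_osd(osd_dir: str) -> None:
    osd_id = osd_dir.rsplit("-", 1)[-1]
    subprocess.run(["systemctl", "stop", f"ceph-osd@{osd_id}"], check=True)
    subprocess.run(["chown", "-R", "ceph:ceph", osd_dir], check=True)
    subprocess.run(["systemctl", "start", f"ceph-osd@{osd_id}"], check=True)


if __name__ == "__main__":
    for path in sorted(glob.glob("/var/lib/ceph/osd/ceph-*")):
        if os.path.isdir(path):
            reown_osd(path)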

> sage
> 
> 
> * eliminate 'snapdir' objects, replacing them with a head object + 
> whiteout.
> _______________________________________________
> ceph-users mailing list
> ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: would people mind a slow osd restart during luminous upgrade?
       [not found] ` <alpine.DEB.2.11.1702090302470.7782-qHenpvqtifaMSRpgCs4c+g@public.gmane.org>
  2017-02-09  4:19   ` David Turner
  2017-02-09  5:57   ` Wido den Hollander
@ 2017-02-09  8:41   ` Henrik Korkuc
       [not found]     ` <24cb8bc3-bda5-467c-b2cb-e9d77ba24679-eal8BvkdEKCHXe+LvDLADg@public.gmane.org>
  2017-02-09 13:30   ` George Mihaiescu
  3 siblings, 1 reply; 10+ messages in thread
From: Henrik Korkuc @ 2017-02-09  8:41 UTC (permalink / raw)
  To: Sage Weil, ceph-users-Qp0mS5GaXlQ, ceph-devel-u79uwXL29TY76Z2rM5mHXA

On 17-02-09 05:09, Sage Weil wrote:
> Hello, ceph operators...
>
> Several times in the past we've had to do some ondisk format conversion
> during upgrade, which meant that the first time the ceph-osd daemon started
> after the upgrade it had to spend a few minutes fixing up its ondisk files.
> We haven't had to recently, though, and generally try to avoid such
> things.
>
> However, there's a change we'd like to make in FileStore for luminous (*)
> and it would save us a lot of time and complexity if it was a one-shot
> update during the upgrade.  It would probably take in the neighborhood of
> 1-5 minutes for a 4-6TB HDD.  That means that when restarting the daemon
> during the upgrade, the OSD would stay down for that period (vs the usual
> <1 minute restart time).
>
> Does this concern anyone?  It probably means the upgrades will take longer
> if you're going host by host since the time per host will go up.
In my opinion, if this is clearly communicated (release notes + OSD logs) 
it's fine; otherwise it may feel like something is wrong when an OSD takes a 
long time to start.

> sage
>
>
> * eliminate 'snapdir' objects, replacing them with a head object +
> whiteout.
> _______________________________________________
> ceph-users mailing list
> ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: would people mind a slow osd restart during luminous upgrade?
       [not found]     ` <24cb8bc3-bda5-467c-b2cb-e9d77ba24679-eal8BvkdEKCHXe+LvDLADg@public.gmane.org>
@ 2017-02-09 11:36       ` Dave Holland
  0 siblings, 0 replies; 10+ messages in thread
From: Dave Holland @ 2017-02-09 11:36 UTC (permalink / raw)
  To: Henrik Korkuc
  Cc: Sage Weil, ceph-devel-u79uwXL29TY76Z2rM5mHXA, ceph-users-Qp0mS5GaXlQ

On Thu, Feb 09, 2017 at 10:41:44AM +0200, Henrik Korkuc wrote:
> On 17-02-09 05:09, Sage Weil wrote:
> >Does this concern anyone?  It probably means the upgrades will take longer
> >if you're going host by host since the time per host will go up.
> In my opinion if this is clearly communicated (release notes + OSD logs)

+1 for having the OSD log something when it starts the upgrade
process, so the sysadmin who goes looking will see what's happening.
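
A sketch of how an operator might wrap each restart so the longer startup is
expected rather than alarming (assumes systemd units; the `ceph osd tree -f
json` field names are from memory, so verify them on your release):

# Restart one OSD and wait -- with a generous timeout to allow for the
# one-shot conversion -- until it reports "up" again before moving on.
# Remember to `ceph osd unset noout` once the whole upgrade is finished.

import json
import subprocess
import sys
import time


def osd_is_up(osd_id: int) -> bool:
    out = subprocess.check_output(["ceph", "osd", "tree", "-f", "json"])
    for node in json.loads(out).get("nodes", []):
        if node.get("type") == "osd" and node.get("id") == osd_id:
            return node.get("status") == "up"
    return False


def restart_and_wait(osd_id: int, timeout: int = 900) -> None:
    subprocess.run(["ceph", "osd", "set", "noout"], check=True)
    subprocess.run(["systemctl", "restart", f"ceph-osd@{osd_id}"], check=True)
    deadline = time.time() + timeout
    while time.time() < deadline:
        if osd_is_up(osd_id):
            print(f"osd.{osd_id} is back up")
            return
        time.sleep(10)
    sys.exit(f"osd.{osd_id} still down after {timeout}s -- check its log")


if __name__ == "__main__":
    restart_and_wait(int(sys.argv[1]))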

Cheers,
Dave
-- 
** Dave Holland ** Systems Support -- Informatics Systems Group **
** 01223 496923 ** The Sanger Institute, Hinxton, Cambridge, UK **


-- 
 The Wellcome Trust Sanger Institute is operated by Genome Research 
 Limited, a charity registered in England with number 1021457 and a 
 company registered in England with number 2742969, whose registered 
 office is 215 Euston Road, London, NW1 2BE. 

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: would people mind a slow osd restart during luminous upgrade?
       [not found] ` <alpine.DEB.2.11.1702090302470.7782-qHenpvqtifaMSRpgCs4c+g@public.gmane.org>
                     ` (2 preceding siblings ...)
  2017-02-09  8:41   ` Henrik Korkuc
@ 2017-02-09 13:30   ` George Mihaiescu
  2017-02-09 14:25     ` [ceph-users] " Sage Weil
  3 siblings, 1 reply; 10+ messages in thread
From: George Mihaiescu @ 2017-02-09 13:30 UTC (permalink / raw)
  To: Sage Weil; +Cc: ceph-devel-u79uwXL29TY76Z2rM5mHXA, ceph-users-Qp0mS5GaXlQ

Hi Sage,

Is the update running in parallel for all OSDs being restarted?

Because 5 min per server is different from 150 min when there are 30 OSDs in it.

Thank you,
George 

> On Feb 8, 2017, at 22:09, Sage Weil <sweil-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:
> 
> Hello, ceph operators...
> 
> Several times in the past we've had to do some ondisk format conversion 
> during upgrade, which meant that the first time the ceph-osd daemon started 
> after the upgrade it had to spend a few minutes fixing up its ondisk files.  
> We haven't had to recently, though, and generally try to avoid such 
> things.
> 
> However, there's a change we'd like to make in FileStore for luminous (*) 
> and it would save us a lot of time and complexity if it was a one-shot 
> update during the upgrade.  It would probably take in the neighborhood of 
> 1-5 minutes for a 4-6TB HDD.  That means that when restarting the daemon 
> during the upgrade, the OSD would stay down for that period (vs the usual 
> <1 minute restart time).
> 
> Does this concern anyone?  It probably means the upgrades will take longer 
> if you're going host by host since the time per host will go up.
> 
> sage
> 
> 
> * eliminate 'snapdir' objects, replacing them with a head object + 
> whiteout.
> _______________________________________________
> ceph-users mailing list
> ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [ceph-users] would people mind a slow osd restart during luminous upgrade?
  2017-02-09 13:30   ` George Mihaiescu
@ 2017-02-09 14:25     ` Sage Weil
  0 siblings, 0 replies; 10+ messages in thread
From: Sage Weil @ 2017-02-09 14:25 UTC (permalink / raw)
  To: George Mihaiescu; +Cc: ceph-users, ceph-devel

On Thu, 9 Feb 2017, George Mihaiescu wrote:
> Hi Sage,
> 
> Is the update running in parallel for all OSDs being restarted? 
> 
> Because 5 min per server is different from 150 min when there are 30 
> OSDs in it.

In parallel.

sage
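
Back-of-the-envelope (ignoring any contention between the conversions, since
each OSD is working on its own disk):

# Per-host downtime with parallel conversion is roughly the slowest single
# OSD, not the sum over all OSDs on the host.
osds_per_host = 30        # example host from the question above
minutes_per_osd = 5       # upper end of the 1-5 minute estimate

serial_estimate = osds_per_host * minutes_per_osd  # ~150 min, one-by-one
parallel_estimate = minutes_per_osd                # ~5 min, all at once

print(f"serial: ~{serial_estimate} min, parallel: ~{parallel_estimate} min")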

> 
> Thank you,
> George 
> 
> > On Feb 8, 2017, at 22:09, Sage Weil <sweil@redhat.com> wrote:
> > 
> > Hello, ceph operators...
> > 
> > Several times in the past we've had to do some ondisk format conversion 
> > during upgrade, which meant that the first time the ceph-osd daemon started 
> > after the upgrade it had to spend a few minutes fixing up its ondisk files.  
> > We haven't had to recently, though, and generally try to avoid such 
> > things.
> > 
> > However, there's a change we'd like to make in FileStore for luminous (*) 
> > and it would save us a lot of time and complexity if it was a one-shot 
> > update during the upgrade.  It would probably take in the neighborhood of 
> > 1-5 minutes for a 4-6TB HDD.  That means that when restarting the daemon 
> > during the upgrade, the OSD would stay down for that period (vs the usual 
> > <1 minute restart time).
> > 
> > Does this concern anyone?  It probably means the upgrades will take longer 
> > if you're going host by host since the time per host will go up.
> > 
> > sage
> > 
> > 
> > * eliminate 'snapdir' objects, replacing them with a head object + 
> > whiteout.
> > _______________________________________________
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 
> 

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: would people mind a slow osd restart during luminous upgrade?
       [not found]     ` <CAN-Gep+5wDxpab3b5AN_i1m2uNHVvhWkjMWjmd7Rmi6Xbnciuw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2017-02-09 14:29       ` Sage Weil
       [not found]         ` <alpine.DEB.2.11.1702091426500.7782-qHenpvqtifaMSRpgCs4c+g@public.gmane.org>
  0 siblings, 1 reply; 10+ messages in thread
From: Sage Weil @ 2017-02-09 14:29 UTC (permalink / raw)
  To: David Turner; +Cc: ceph-devel-u79uwXL29TY76Z2rM5mHXA, ceph-users-Qp0mS5GaXlQ

On Thu, 9 Feb 2017, David Turner wrote:
> The only issue I can think of is if there isn't a version of the clients
> fully tested to work with a partially upgraded cluster or a documented
> incompatibility requiring downtime. We've had upgrades where we had to
> upgrade clients first and others that we had to do the clients last due to
> issues with how the clients interacted with an older cluster, partially
> upgraded cluster, or newer cluster.

We maintain client compatibility across *many* releases and several 
years.  In general this is under the control of the administrator via their 
choice of CRUSH tunables, which effectively let you choose the oldest 
client you'd like to support.
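
A sketch of the kind of check that implies (both CLI commands exist; the
JSON field names are from memory, so treat the parsing as illustrative):

# Inspect the current CRUSH tunables profile, and optionally pin an older
# one to keep older kernel/librados clients usable (at the cost of newer
# placement improvements).

import json
import subprocess


def show_tunables() -> dict:
    out = subprocess.check_output(
        ["ceph", "osd", "crush", "show-tunables", "-f", "json"])
    return json.loads(out)


def pin_profile(profile: str = "hammer") -> None:
    subprocess.run(["ceph", "osd", "crush", "tunables", profile], check=True)


if __name__ == "__main__":
    t = show_tunables()
    print("minimum client release required:",
          t.get("minimum_required_version", "unknown"))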

I'm curious which upgrade you had problems with.  Generally speaking, the 
only "client" upgrade ordering issue is with the radosgw clients, which 
need to be upgraded after the OSDs.

> If the FileStore is changing this much, I can imagine a Jewel client having
> a hard time locating the objects it needs from a Luminous cluster.

In this case the change would be internal to a single OSD and have no 
effect on the client/osd interaction or placement of objects.

sage

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: would people mind a slow osd restart during luminous upgrade?
       [not found]         ` <alpine.DEB.2.11.1702091426500.7782-qHenpvqtifaMSRpgCs4c+g@public.gmane.org>
@ 2017-02-09 17:12           ` David Turner
       [not found]             ` <CAN-GepLfgY-D6UaN+ZLt9Kdo20F4gOUnFPSOpjuNZzy=9bbKQw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  0 siblings, 1 reply; 10+ messages in thread
From: David Turner @ 2017-02-09 17:12 UTC (permalink / raw)
  To: Sage Weil; +Cc: ceph-devel-u79uwXL29TY76Z2rM5mHXA, ceph-users-Qp0mS5GaXlQ



When we upgraded to Jewel 10.2.3 from Hammer 0.94.7 in our QA cluster, we
had issues with client incompatibility.  We first tried upgrading our
clients before upgrading the cluster.  This broke creating RBDs, cloning
RBDs, and probably many other things.  We quickly called that test a wash
and redeployed the cluster back to 0.94.7 and redid the upgrade by
partially upgrading the cluster, testing, fully upgrading the cluster,
testing, and finally upgrading the clients to Jewel.  This worked with no
issues creating RBDs, cloning, snapshots, deleting, etc.

I'm not sure if there was a previous reason that we decided to always
upgrade the clients first.  It might have had to do with the upgrade from
Firefly to Hammer.  It's just something we always test now, especially with
full version upgrades.  That being said, noting in the release notes which
client version was regression tested throughout the cluster upgrade would
be great.

On Thu, Feb 9, 2017 at 7:29 AM Sage Weil <sweil-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:

> On Thu, 9 Feb 2017, David Turner wrote:
> > The only issue I can think of is if there isn't a version of the clients
> > fully tested to work with a partially upgraded cluster or a documented
> > incompatibility requiring downtime. We've had upgrades where we had to
> > upgrade clients first and others that we had to do the clients last due to
> > issues with how the clients interacted with an older cluster, partially
> > upgraded cluster, or newer cluster.
>
> We maintain client compatibility across *many* releases and several
> years.  In general this is under the control of the administrator via their
> choice of CRUSH tunables, which effectively let you choose the oldest
> client you'd like to support.
>
> I'm curious which upgrade you had problems with.  Generally speaking, the
> only "client" upgrade ordering issue is with the radosgw clients, which
> need to be upgraded after the OSDs.
>
> > If the FileStore is changing this much, I can imagine a Jewel client having
> > a hard time locating the objects it needs from a Luminous cluster.
>
> In this case the change would be internal to a single OSD and have no
> effect on the client/osd interaction or placement of objects.
>
> sage
>


^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: would people mind a slow osd restart during luminous upgrade?
       [not found]             ` <CAN-GepLfgY-D6UaN+ZLt9Kdo20F4gOUnFPSOpjuNZzy=9bbKQw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2017-02-09 17:24               ` Brian Andrus
  0 siblings, 0 replies; 10+ messages in thread
From: Brian Andrus @ 2017-02-09 17:24 UTC (permalink / raw)
  To: David Turner; +Cc: Sage Weil, Squid Cybernetic, ceph-users



On Thu, Feb 9, 2017 at 9:12 AM, David Turner <drakonstein-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org> wrote:

> When we upgraded to Jewel 10.2.3 from Hammer 0.94.7 in our QA cluster, we
> had issues with client incompatibility.  We first tried upgrading our
> clients before upgrading the cluster.  This broke creating RBDs, cloning
> RBDs, and probably many other things.  We quickly called that test a wash
> and redeployed the cluster back to 0.94.7 and redid the upgrade by
> partially upgrading the cluster, testing, fully upgrading the cluster,
> testing, and finally upgrading the clients to Jewel.  This worked with no
> issues creating RBDs, cloning, snapshots, deleting, etc.
>
> I'm not sure if there was a previous reason that we decided to always
> upgrade the clients first.  It might have had to do with the upgrade from
> Firefly to Hammer.  It's just something we always test now, especially with
> full version upgrades.  That being said, noting in the release notes which
> client version was regression tested throughout the cluster upgrade would
> be great.
>

I agree - it would have been nice to have this in the release notes;
however, we only hit it because we're hyperconverged (clients using Jewel
against a Hammer cluster that hasn't yet had its daemons restarted). We are
fixing it by setting rbd_default_features = 3 in our upcoming upgrade. We
will then unset it once the cluster is running Jewel.
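
For reference, a sketch of that workaround (rbd feature bits: layering=1,
striping(v2)=2, so 3 restricts new images to features Hammer OSDs
understand; the pool/image names are examples and the flag spelling follows
the Jewel rbd CLI as I remember it):

# Illustrative only.  Setting rbd_default_features = 3 in the clients'
# ceph.conf makes new images default to only these features; the per-image
# flag below does the same thing explicitly for a single image.

import subprocess

HAMMER_SAFE_FEATURES = 1 | 2   # layering + stripingv2 == 3


def create_hammer_compatible_image(pool: str, image: str, size_mb: int) -> None:
    subprocess.run(
        ["rbd", "create", f"{pool}/{image}",
         "--size", str(size_mb),            # plain number is megabytes
         "--image-feature", "layering"],    # skip exclusive-lock/object-map/...
        check=True)


if __name__ == "__main__":
    print("rbd_default_features =", HAMMER_SAFE_FEATURES)
    create_hammer_compatible_image("rbd", "test-image", 10240)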


>
> On Thu, Feb 9, 2017 at 7:29 AM Sage Weil <sweil-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:
>
>> On Thu, 9 Feb 2017, David Turner wrote:
>> > The only issue I can think of is if there isn't a version of the clients
>> > fully tested to work with a partially upgraded cluster or a documented
>> > incompatibility requiring downtime. We've had upgrades where we had to
>> > upgrade clients first and others that we had to do the clients last due to
>> > issues with how the clients interacted with an older cluster, partially
>> > upgraded cluster, or newer cluster.
>>
>> We maintain client compatibility across *many* releases and several
>> years.  In general this is under the control of the administrator via their
>> choice of CRUSH tunables, which effectively let you choose the oldest
>> client you'd like to support.
>>
>> I'm curious which upgrade you had problems with.  Generally speaking, the
>> only "client" upgrade ordering issue is with the radosgw clients, which
>> need to be upgraded after the OSDs.
>>
>> > If the FileStore is changing this much, I can imagine a Jewel client having
>> > a hard time locating the objects it needs from a Luminous cluster.
>>
>> In this case the change would be internal to a single OSD and have no
>> effect on the client/osd interaction or placement of objects.
>>
>> sage
>>
>
> _______________________________________________
> ceph-users mailing list
> ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>


-- 
Brian Andrus | Cloud Systems Engineer | DreamHost
brian.andrus-1Zbx3wZeAm3by3iVrkZq2A@public.gmane.org | www.dreamhost.com


^ permalink raw reply	[flat|nested] 10+ messages in thread

end of thread, other threads:[~2017-02-09 17:24 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-02-09  3:09 would people mind a slow osd restart during luminous upgrade? Sage Weil
     [not found] ` <alpine.DEB.2.11.1702090302470.7782-qHenpvqtifaMSRpgCs4c+g@public.gmane.org>
2017-02-09  4:19   ` David Turner
     [not found]     ` <CAN-Gep+5wDxpab3b5AN_i1m2uNHVvhWkjMWjmd7Rmi6Xbnciuw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2017-02-09 14:29       ` Sage Weil
     [not found]         ` <alpine.DEB.2.11.1702091426500.7782-qHenpvqtifaMSRpgCs4c+g@public.gmane.org>
2017-02-09 17:12           ` David Turner
     [not found]             ` <CAN-GepLfgY-D6UaN+ZLt9Kdo20F4gOUnFPSOpjuNZzy=9bbKQw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2017-02-09 17:24               ` Brian Andrus
2017-02-09  5:57   ` Wido den Hollander
2017-02-09  8:41   ` Henrik Korkuc
     [not found]     ` <24cb8bc3-bda5-467c-b2cb-e9d77ba24679-eal8BvkdEKCHXe+LvDLADg@public.gmane.org>
2017-02-09 11:36       ` Dave Holland
2017-02-09 13:30   ` George Mihaiescu
2017-02-09 14:25     ` [ceph-users] " Sage Weil
