* Pool setting for recovery priority
@ 2015-09-16 15:48 GuangYang
  2015-09-16 16:23 ` Sage Weil
  0 siblings, 1 reply; 12+ messages in thread
From: GuangYang @ 2015-09-16 15:48 UTC (permalink / raw)
  To: sjust; +Cc: ceph-devel

Hi Sam,
As part of the effort to solve problems similar to issue #13104 (http://tracker.ceph.com/issues/13104), do you think it is appropriate to add some parameters to the pool settings:
   1. recovery priority of the pool - we have a customized pool recovery priority (like a process's nice value) to favor some pools over others. For example, the bucket index pool is usually much, much smaller but important to recover first (e.g., it might affect write latency, as in issue #13104).
   2. pool-level recovery op priority - currently we have a low priority for recovery ops (by default it is 10, while client IO's priority is 63); is it possible to have a pool setting to customize the priority at the pool level?

The purpose is to give some flexibility to favor some pools over others when doing recovery. In our case, using radosgw, we would like to favor the bucket index pool, as it is on the write path for all requests.
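
For illustration, the knobs might look something like this from the CLI (purely hypothetical syntax - neither setting exists yet, and the pool name below is just the usual radosgw bucket index pool):

   # favor the bucket index pool when scheduling PG recovery
   ceph osd pool set .rgw.buckets.index recovery_priority 5
   # and raise the priority of its individual recovery ops
   ceph osd pool set .rgw.buckets.index recovery_op_priority 40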

Thanks,
Guang


* Re: Pool setting for recovery priority
  2015-09-16 15:48 Pool setting for recovery priority GuangYang
@ 2015-09-16 16:23 ` Sage Weil
  2015-09-16 17:58   ` GuangYang
  2015-09-21 13:32   ` Mykola Golub
  0 siblings, 2 replies; 12+ messages in thread
From: Sage Weil @ 2015-09-16 16:23 UTC (permalink / raw)
  To: GuangYang; +Cc: sjust, ceph-devel


On Wed, 16 Sep 2015, GuangYang wrote:
> Hi Sam,
> As part of the effort to solve problems similar to issue #13104 (http://tracker.ceph.com/issues/13104), do you think it is appropriate to add some parameters to the pool settings:
>    1. recovery priority of the pool - we have a customized pool recovery priority (like a process's nice value) to favor some pools over others. For example, the bucket index pool is usually much, much smaller but important to recover first (e.g., it might affect write latency, as in issue #13104).
>    2. pool-level recovery op priority - currently we have a low priority for recovery ops (by default it is 10, while client IO's priority is 63); is it possible to have a pool setting to customize the priority at the pool level?
>
> The purpose is to give some flexibility to favor some pools over others when doing recovery. In our case, using radosgw, we would like to favor the bucket index pool, as it is on the write path for all requests.

I think this makes sense, and is analogous to

	https://github.com/ceph/ceph/pull/5922

which does per-pool scrub settings.  I think the only real question is 
whether pg_pool_t is the right place to keep piling these parameters in, 
or whether we want some unstructured key/value settings or something.
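
Roughly, the two alternatives look like this (a sketch only; the member
names are illustrative, not actual pg_pool_t fields):

	#include <map>
	#include <string>

	// (a) keep adding one typed field per setting
	struct pg_pool_t_fields {        // illustrative stand-in for pg_pool_t
	  double scrub_min_interval;     // per-pool scrub setting (PR #5922)
	  int recovery_priority;         // proposed here
	  int recovery_op_priority;      // proposed here
	};

	// (b) one unstructured key/value dictionary
	struct pg_pool_t_dict {          // illustrative stand-in for pg_pool_t
	  std::map<std::string,std::string> opts;  // "recovery_priority" -> "5"
	};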

sage


* RE: Pool setting for recovery priority
  2015-09-16 16:23 ` Sage Weil
@ 2015-09-16 17:58   ` GuangYang
  2015-09-21 13:32   ` Mykola Golub
  1 sibling, 0 replies; 12+ messages in thread
From: GuangYang @ 2015-09-16 17:58 UTC (permalink / raw)
  To: Weil Sage; +Cc: sjust, ceph-devel

Thanks Sage, I just opened a tracker for this - http://tracker.ceph.com/issues/13121.

Thanks,
Guang

----------------------------------------
> Date: Wed, 16 Sep 2015 09:23:07 -0700
> From: sweil@redhat.com
> To: yguang11@outlook.com
> CC: sjust@redhat.com; ceph-devel@vger.kernel.org
> Subject: Re: Pool setting for recovery priority
>
> On Wed, 16 Sep 2015, GuangYang wrote:
>> Hi Sam,
>> As part of the effort to solve problems similar to issue #13104 (http://tracker.ceph.com/issues/13104), do you think it is appropriate to add some parameters to the pool settings:
>> 1. recovery priority of the pool - we have a customized pool recovery priority (like a process's nice value) to favor some pools over others. For example, the bucket index pool is usually much, much smaller but important to recover first (e.g., it might affect write latency, as in issue #13104).
>> 2. pool-level recovery op priority - currently we have a low priority for recovery ops (by default it is 10, while client IO's priority is 63); is it possible to have a pool setting to customize the priority at the pool level?
>>
>> The purpose is to give some flexibility to favor some pools over others when doing recovery. In our case, using radosgw, we would like to favor the bucket index pool, as it is on the write path for all requests.
>
> I think this makes sense, and is analogous to
>
> https://github.com/ceph/ceph/pull/5922
>
> which does per-pool scrub settings. I think the only real question is
> whether pg_pool_t is the right place to keep piling these parameters in,
> or whether we want some unstructured key/value settings or something.
>
> sage


* Re: Pool setting for recovery priority
  2015-09-16 16:23 ` Sage Weil
  2015-09-16 17:58   ` GuangYang
@ 2015-09-21 13:32   ` Mykola Golub
  2015-09-25 11:44     ` Mykola Golub
  1 sibling, 1 reply; 12+ messages in thread
From: Mykola Golub @ 2015-09-21 13:32 UTC (permalink / raw)
  To: Sage Weil; +Cc: GuangYang, sjust, ceph-devel

On Wed, Sep 16, 2015 at 09:23:07AM -0700, Sage Weil wrote:
> On Wed, 16 Sep 2015, GuangYang wrote:
> > Hi Sam,
> > As part of the effort to solve problems similar to issue #13104 (http://tracker.ceph.com/issues/13104), do you think it is appropriate to add some parameters to the pool settings:
> >    1. recovery priority of the pool - we have a customized pool recovery priority (like a process's nice value) to favor some pools over others. For example, the bucket index pool is usually much, much smaller but important to recover first (e.g., it might affect write latency, as in issue #13104).
> >    2. pool-level recovery op priority - currently we have a low priority for recovery ops (by default it is 10, while client IO's priority is 63); is it possible to have a pool setting to customize the priority at the pool level?
> >
> > The purpose is to give some flexibility to favor some pools over others when doing recovery. In our case, using radosgw, we would like to favor the bucket index pool, as it is on the write path for all requests.
> 
> I think this makes sense, and is analogous to
> 
> 	https://github.com/ceph/ceph/pull/5922
> 
> which does per-pool scrub settings.  I think the only real question is 
> whether pg_pool_t is the right place to keep piling these parameters in, 
> or whether we want some unstructured key/value settings or something.

I agree that adding a bunch of new rarely used fields to pg_pool_t
might not be a very good idea. Still, storing these options there looks
convenient (accessing, updating, etc.). What do you think about adding
something like this to pg_pool_t instead?

  typedef boost::variant<string,int,double> pool_opt_value_t;
  typedef std::map<pool_opt_key_t,pool_opt_value_t> opts_t;
  opts_t opts;

(In reality I suppose it will be more complicated, but it will have
something like this as a base.)

Usually opts will be empty or have only one or two settings, so it
will not consume much space.
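
For example, here is a minimal standalone sketch of how the map would
behave (assuming pool_opt_key_t is just a string; encoding/decoding and
validation are left out):

  #include <boost/variant.hpp>
  #include <iostream>
  #include <map>
  #include <string>

  typedef std::string pool_opt_key_t;
  typedef boost::variant<std::string,int,double> pool_opt_value_t;
  typedef std::map<pool_opt_key_t,pool_opt_value_t> opts_t;

  int main() {
    opts_t opts;
    opts["recovery_priority"] = 5;         // held as int
    opts["scrub_min_interval"] = 86400.0;  // held as double
    // boost::variant prints whichever bounded type is currently held
    for (const auto &o : opts)
      std::cout << o.first << " = " << o.second << "\n";
  }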

Or where would you suggest storing them instead?

BTW, I see we already have in pg_pool_t:

  map<string,string> properties;  ///< OBSOLETE

I wonder what it was supposed to be used for and why it is marked
obsolete.

-- 
Mykola Golub


* Re: Pool setting for recovery priority
  2015-09-21 13:32   ` Mykola Golub
@ 2015-09-25 11:44     ` Mykola Golub
  2015-09-25 14:09       ` Sage Weil
  0 siblings, 1 reply; 12+ messages in thread
From: Mykola Golub @ 2015-09-25 11:44 UTC (permalink / raw)
  To: Sage Weil; +Cc: GuangYang, sjust, ceph-devel, David Zafman

Hi,

On Mon, Sep 21, 2015 at 04:32:19PM +0300, Mykola Golub wrote:
> On Wed, Sep 16, 2015 at 09:23:07AM -0700, Sage Weil wrote:
> > On Wed, 16 Sep 2015, GuangYang wrote:
> > > Hi Sam,
> > > As part of the effort to solve problems similar to issue #13104 (http://tracker.ceph.com/issues/13104), do you think it is appropriate to add some parameters to the pool settings:
> > >    1. recovery priority of the pool - we have a customized pool recovery priority (like a process's nice value) to favor some pools over others. For example, the bucket index pool is usually much, much smaller but important to recover first (e.g., it might affect write latency, as in issue #13104).
> > >    2. pool-level recovery op priority - currently we have a low priority for recovery ops (by default it is 10, while client IO's priority is 63); is it possible to have a pool setting to customize the priority at the pool level?
> > >
> > > The purpose is to give some flexibility to favor some pools over others when doing recovery. In our case, using radosgw, we would like to favor the bucket index pool, as it is on the write path for all requests.
> > 
> > I think this makes sense, and is analogous to
> > 
> > 	https://github.com/ceph/ceph/pull/5922
> > 
> > which does per-pool scrub settings.  I think the only real question is 
> > whether pg_pool_t is the right place to keep piling these parameters in, 
> > or whether we want some unstructured key/value settings or something.
> 
> I agree that adding a bunch of new rarely used fields to pg_pool_t
> might not be a very good idea. Still, storing these options there looks
> convenient (accessing, updating, etc.). What do you think about adding
> something like this to pg_pool_t instead?
> 
>   typedef boost::variant<string,int,double> pool_opt_value_t;
>   typedef std::map<pool_opt_key_t,pool_opt_value_t> opts_t;
>   opts_t opts;
> 
> (In reality I suppose it will be more complicated, but it will have
> something like this as a base.)
> 
> Usually opts will be empty or have only one or two settings, so it
> will not consume much space.

What do you think about this implementation, which adds a dictionary
for pool options to pg_pool_t?

https://github.com/ceph/ceph/pull/6081

Although #5922 has already been merged to master, I think it is still
not too late to change the scrub intervals to be stored in options?
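
If it helps the review, the dictionary can also be wrapped in a small
typed interface along these lines (a sketch only; the names and
signatures are illustrative, not necessarily what the PR does):

  #include <boost/variant.hpp>
  #include <map>
  #include <string>

  struct pool_opts_t {
    typedef boost::variant<std::string,int,double> value_t;

    void set(const std::string &key, const value_t &v) { kv[key] = v; }
    bool is_set(const std::string &key) const { return kv.count(key) > 0; }

    // returns false if the key is absent; boost::get throws
    // boost::bad_get on a type mismatch, which real callers would guard
    template<typename T>
    bool get(const std::string &key, T *out) const {
      std::map<std::string,value_t>::const_iterator i = kv.find(key);
      if (i == kv.end())
        return false;
      *out = boost::get<T>(i->second);
      return true;
    }

    std::map<std::string,value_t> kv;
  };

  // usage: int prio; if (pool.opts.get("recovery_priority", &prio)) ...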

> 
> Or where would you suggest storing them instead?
> 
> BTW, I see we already have in pg_pool_t:
> 
>   map<string,string> properties;  ///< OBSOLETE
> 
> I wonder what it was supposed to be used for and why it is marked
> obsolete.
> 
> -- 
> Mykola Golub

-- 
Mykola Golub


* Re: Pool setting for recovery priority
  2015-09-25 11:44     ` Mykola Golub
@ 2015-09-25 14:09       ` Sage Weil
  2015-09-25 17:50         ` Mykola Golub
  2015-09-29 17:23         ` GuangYang
  0 siblings, 2 replies; 12+ messages in thread
From: Sage Weil @ 2015-09-25 14:09 UTC (permalink / raw)
  To: Mykola Golub; +Cc: GuangYang, sjust, ceph-devel, David Zafman


Hi Mykola,

On Fri, 25 Sep 2015, Mykola Golub wrote:
> Hi,
> 
> On Mon, Sep 21, 2015 at 04:32:19PM +0300, Mykola Golub wrote:
> > On Wed, Sep 16, 2015 at 09:23:07AM -0700, Sage Weil wrote:
> > > On Wed, 16 Sep 2015, GuangYang wrote:
> > > > Hi Sam,
> > > > As part of the effort to solve problems similar to issue #13104 (http://tracker.ceph.com/issues/13104), do you think it is appropriate to add some parameters to the pool settings:
> > > >    1. recovery priority of the pool - we have a customized pool recovery priority (like a process's nice value) to favor some pools over others. For example, the bucket index pool is usually much, much smaller but important to recover first (e.g., it might affect write latency, as in issue #13104).
> > > >    2. pool-level recovery op priority - currently we have a low priority for recovery ops (by default it is 10, while client IO's priority is 63); is it possible to have a pool setting to customize the priority at the pool level?
> > > >
> > > > The purpose is to give some flexibility to favor some pools over others when doing recovery. In our case, using radosgw, we would like to favor the bucket index pool, as it is on the write path for all requests.
> > > 
> > > I think this makes sense, and is analogous to
> > > 
> > > 	https://github.com/ceph/ceph/pull/5922
> > > 
> > > which does per-pool scrub settings.  I think the only real question is 
> > > whether pg_pool_t is the right place to keep piling these parameters in, 
> > > or whether we want some unstructured key/value settings or something.
> > 
> > I agree that adding a bunch of new rarely used fields to pg_pool_t
> > might not be a very good idea. Still, storing these options there looks
> > convenient (accessing, updating, etc.). What do you think about adding
> > something like this to pg_pool_t instead?
> > 
> >   typedef boost::variant<string,int,double> pool_opt_value_t;
> >   typedef std::map<pool_opt_key_t,pool_opt_value_t> opts_t;
> >   opts_t opts;
> > 
> > (In reality I suppose it will be more complicated, but it will have
> > something like this as a base.)
> > 
> > Usually opts will be empty or have only one or two settings, so it
> > will not consume much space.
> 
> What do you think about this implementation, which adds a dictionary
> for pool options to pg_pool_t?
> 
> https://github.com/ceph/ceph/pull/6081
> 
> Although #5922 has already been merged to master, I think it is still
> not too late to change the scrub intervals to be stored in options?

Yeah, I agree that something along these lines is better.  It's too late 
to add this to infernalis, though. I think we should revert the scrub 
interval options and then use the dictionary (post-infernalis).

How does that sound?
sage


> 
> > 
> > Or where would you suggest storing them instead?
> > 
> > BTW, I see we already have in pg_pool_t:
> > 
> >   map<string,string> properties;  ///< OBSOLETE
> > 
> > I wonder what it was supposed to be used for and why it is marked
> > obsolete.
> > 
> > -- 
> > Mykola Golub
> 
> -- 
> Mykola Golub


* Re: Pool setting for recovery priority
  2015-09-25 14:09       ` Sage Weil
@ 2015-09-25 17:50         ` Mykola Golub
  2015-09-25 18:02           ` Sage Weil
  2015-09-29 17:23         ` GuangYang
  1 sibling, 1 reply; 12+ messages in thread
From: Mykola Golub @ 2015-09-25 17:50 UTC (permalink / raw)
  To: Sage Weil; +Cc: Mykola Golub, GuangYang, sjust, ceph-devel, David Zafman

On Fri, Sep 25, 2015 at 07:09:53AM -0700, Sage Weil wrote:
> Hi Mykola,
> 
> On Fri, 25 Sep 2015, Mykola Golub wrote:
> > What do you think about this implementation, which adds a dictionary
> > for pool options to pg_pool_t?
> > 
> > https://github.com/ceph/ceph/pull/6081
> > 
> > Although #5922 has already been merged to master, I think it is still
> > not too late to change the scrub intervals to be stored in options?
> 
> Yeah, I agree that something along these lines is better.  It's too late 
> to add this to infernalis, though. I think we should revert the scrub 
> interval options and then use the dictionary (post-infernalis).
> 
> How does that sound?

It sounds good to me. I will rebase #6081 against master after the
previous patch is reverted and mark it [DNM].

-- 
Mykola Golub


* Re: Pool setting for recovery priority
  2015-09-25 17:50         ` Mykola Golub
@ 2015-09-25 18:02           ` Sage Weil
  2015-09-25 18:09             ` Mykola Golub
  0 siblings, 1 reply; 12+ messages in thread
From: Sage Weil @ 2015-09-25 18:02 UTC (permalink / raw)
  To: Mykola Golub; +Cc: Mykola Golub, GuangYang, sjust, ceph-devel, David Zafman

On Fri, 25 Sep 2015, Mykola Golub wrote:
> On Fri, Sep 25, 2015 at 07:09:53AM -0700, Sage Weil wrote:
> > Hi Mykola,
> > 
> > On Fri, 25 Sep 2015, Mykola Golub wrote:
> > > What do you think about this implementation, which adds a dictionary
> > > for pool options to pg_pool_t?
> > > 
> > > https://github.com/ceph/ceph/pull/6081
> > > 
> > > Although #5922 has already been merged to master, I think it is still
> > > not too late to change the scrub intervals to be stored in options?
> > 
> > Yeah, I agree that something along these lines is better.  It's too late 
> > to add this to infernalis, though. I think we should revert the scrub 
> > interval options and then use the dictionary (post-infernalis).
> > 
> > How does that sound?
> 
> It sounds good to me. I will rebase #6081 against master after the
> previous patch is reverted and mark it [DNM].

It's just this last commit, right?

https://github.com/ceph/ceph/pull/6084

sage


* Re: Pool setting for recovery priority
  2015-09-25 18:02           ` Sage Weil
@ 2015-09-25 18:09             ` Mykola Golub
  0 siblings, 0 replies; 12+ messages in thread
From: Mykola Golub @ 2015-09-25 18:09 UTC (permalink / raw)
  To: Sage Weil; +Cc: Mykola Golub, GuangYang, sjust, ceph-devel, David Zafman

On Fri, Sep 25, 2015 at 11:02:36AM -0700, Sage Weil wrote:

> It's just this last commit, right?
> 
> https://github.com/ceph/ceph/pull/6084

Yes, thanks.

-- 
Mykola Golub


* RE: Pool setting for recovery priority
  2015-09-25 14:09       ` Sage Weil
  2015-09-25 17:50         ` Mykola Golub
@ 2015-09-29 17:23         ` GuangYang
  2015-09-29 18:47           ` Mykola Golub
  1 sibling, 1 reply; 12+ messages in thread
From: GuangYang @ 2015-09-29 17:23 UTC (permalink / raw)
  To: Weil Sage, Mykola Golub; +Cc: sjust, ceph-devel, David Zafman

I sort of misunderstood the pain point here; I thought we wanted to simplify setting those parameters from the CLI side, as we have more and more pool-level settings, not from the internal implementation's perspective.

Thanks, Mykola, for implementing this; I will rebase the priority-setting stuff against it.

Thanks,
Guang

----------------------------------------
> Date: Fri, 25 Sep 2015 07:09:53 -0700
> From: sweil@redhat.com
> To: mgolub@mirantis.com
> CC: yguang11@outlook.com; sjust@redhat.com; ceph-devel@vger.kernel.org; dzafman@redhat.com
> Subject: Re: Pool setting for recovery priority
>
> Hi Mykola,
>
> On Fri, 25 Sep 2015, Mykola Golub wrote:
>> Hi,
>>
>> On Mon, Sep 21, 2015 at 04:32:19PM +0300, Mykola Golub wrote:
>>> On Wed, Sep 16, 2015 at 09:23:07AM -0700, Sage Weil wrote:
>>>> On Wed, 16 Sep 2015, GuangYang wrote:
>>>>> Hi Sam,
>>>>> As part of the effort to solve problems similar to issue #13104 (http://tracker.ceph.com/issues/13104), do you think it is appropriate to add some parameters to the pool settings:
>>>>> 1. recovery priority of the pool - we have a customized pool recovery priority (like a process's nice value) to favor some pools over others. For example, the bucket index pool is usually much, much smaller but important to recover first (e.g., it might affect write latency, as in issue #13104).
>>>>> 2. pool-level recovery op priority - currently we have a low priority for recovery ops (by default it is 10, while client IO's priority is 63); is it possible to have a pool setting to customize the priority at the pool level?
>>>>>
>>>>> The purpose is to give some flexibility to favor some pools over others when doing recovery. In our case, using radosgw, we would like to favor the bucket index pool, as it is on the write path for all requests.
>>>>
>>>> I think this makes sense, and is analogous to
>>>>
>>>> https://github.com/ceph/ceph/pull/5922
>>>>
>>>> which does per-pool scrub settings. I think the only real question is
>>>> whether pg_pool_t is the right place to keep piling these parameters in,
>>>> or whether we want some unstructured key/value settings or something.
>>>
>>> I agree that adding a bunch of new rarely used fields to pg_pool_t
>>> might not be a very good idea. Still, storing these options there looks
>>> convenient (accessing, updating, etc.). What do you think about adding
>>> something like this to pg_pool_t instead?
>>>
>>> typedef boost::variant<string,int,double> pool_opt_value_t;
>>> typedef std::map<pool_opt_key_t,pool_opt_value_t> opts_t;
>>> opts_t opts;
>>>
>>> (In reality I suppose it will be more complicated, but it will have
>>> something like this as a base.)
>>>
>>> Usually opts will be empty or have only one or two settings, so it
>>> will not consume much space.
>>
>> What do you think about this implementation, which adds a dictionary
>> for pool options to pg_pool_t?
>>
>> https://github.com/ceph/ceph/pull/6081
>>
>> Although #5922 has already been merged to master, I think it is still
>> not too late to change the scrub intervals to be stored in options?
>
> Yeah, I agree that something along these lines is better. It's too late
> to add this to infernalis, though. I think we should revert the scrub
> interval options and then use the dictionary (post-infernalis).
>
> How does that sound?
> sage
>
>
>>
>>>
>>> Or where would you suggest storing them instead?
>>>
>>> BTW, I see we already have in pg_pool_t:
>>>
>>> map<string,string> properties; ///< OBSOLETE
>>>
>>> I wonder what it was supposed to be used for and why it is marked
>>> obsolete.
>>>
>>> --
>>> Mykola Golub
>>
>> --
>> Mykola Golub


* Re: Pool setting for recovery priority
  2015-09-29 17:23         ` GuangYang
@ 2015-09-29 18:47           ` Mykola Golub
  2015-09-29 20:57             ` GuangYang
  0 siblings, 1 reply; 12+ messages in thread
From: Mykola Golub @ 2015-09-29 18:47 UTC (permalink / raw)
  To: GuangYang; +Cc: Weil Sage, Mykola Golub, sjust, ceph-devel, David Zafman

On Tue, Sep 29, 2015 at 10:23:54AM -0700, GuangYang wrote:

> I sort of misunderstood the pain point here; I thought we wanted to
> simplify setting those parameters from the CLI side, as we have more
> and more pool-level settings, not from the internal implementation's
> perspective.
>
> Thanks, Mykola, for implementing this; I will rebase the
> priority-setting stuff against it.

Ah, so you are working on adding per-pool priority options? I have
just started working on this too, but if you are already doing this I
will abandon it.

-- 
Mykola Golub


* RE: Pool setting for recovery priority
  2015-09-29 18:47           ` Mykola Golub
@ 2015-09-29 20:57             ` GuangYang
  0 siblings, 0 replies; 12+ messages in thread
From: GuangYang @ 2015-09-29 20:57 UTC (permalink / raw)
  To: Mykola Golub; +Cc: Weil Sage, Mykola Golub, sjust, ceph-devel, David Zafman

Yeah, it is this one - https://github.com/ceph/ceph/pull/5953. It does not conflict with your work but just needs a rebase; I will do that once your PR passes review. Thanks.

----------------------------------------
> Date: Tue, 29 Sep 2015 21:47:42 +0300
> From: to.my.trociny@gmail.com
> To: yguang11@outlook.com
> CC: sweil@redhat.com; mgolub@mirantis.com; sjust@redhat.com; ceph-devel@vger.kernel.org; dzafman@redhat.com
> Subject: Re: Pool setting for recovery priority
>
> On Tue, Sep 29, 2015 at 10:23:54AM -0700, GuangYang wrote:
>
>> I sort of misunderstood the pain point here; I thought we wanted to
>> simplify setting those parameters from the CLI side, as we have more
>> and more pool-level settings, not from the internal implementation's
>> perspective.
>>
>> Thanks, Mykola, for implementing this; I will rebase the
>> priority-setting stuff against it.
>
> Ah, so you are working on adding per-pool priority options? I have
> just started working on this too, but if you are already doing this I
> will abandon it.
>
> --
> Mykola Golub


