* PG state issue
@ 2012-03-01  9:43 Henry C Chang
  2012-03-01 10:34 ` Wido den Hollander
  2012-03-01 16:14 ` Sage Weil
  0 siblings, 2 replies; 5+ messages in thread
From: Henry C Chang @ 2012-03-01  9:43 UTC (permalink / raw)
  To: ceph-devel

Hi,

With version 0.42, I found that PGs are no longer in the "degraded"
state when the number of OSDs is smaller than the replication count.
For example, if I create a cluster of one OSD with replication 2, all
PGs are in the active+clean state. (The PGs were in active+clean+degraded
state in earlier versions.)
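To make the example concrete, here is a minimal sketch (my own illustration, not Ceph's actual code) of the condition being described: a PG counts as degraded when it has fewer acting replicas than the pool's replication size asks for.

```python
# Hypothetical sketch: a PG is "degraded" when the number of OSDs
# actually holding a copy is below the pool's replication size.
def is_degraded(acting_osds: int, pool_size: int) -> bool:
    return acting_osds < pool_size

# One OSD in the cluster, replication 2: every PG is degraded.
print(is_degraded(acting_osds=1, pool_size=2))  # True
# Two OSDs holding both replicas: not degraded.
print(is_degraded(acting_osds=2, pool_size=2))  # False
```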

I think this is a bug in cluster status reporting, although it has not
caused me any other problems so far.

Henry


* Re: PG state issue
  2012-03-01  9:43 PG state issue Henry C Chang
@ 2012-03-01 10:34 ` Wido den Hollander
  2012-03-01 16:14 ` Sage Weil
  1 sibling, 0 replies; 5+ messages in thread
From: Wido den Hollander @ 2012-03-01 10:34 UTC (permalink / raw)
  To: Henry C Chang; +Cc: ceph-devel

Hi,

On 03/01/2012 10:43 AM, Henry C Chang wrote:
> Hi,
> With version 0.42, I found that PGs are no longer in the "degraded"
> state when the number of OSDs is smaller than the replication count.
> For example, if I create a cluster of one OSD with replication 2, all
> PGs are in the active+clean state. (The PGs were in active+clean+degraded
> state in earlier versions.)

Just a confirmation from my side.

Currently I have 3 OSDs which are down/out, and only one PG went into
the degraded state. The weird thing is that these OSDs are not acting
for this PG.

The OSDs for this PG are all up and in.

Wido

>
> I think this is a bug in cluster status reporting, although it has not
> caused me any other problems so far.
>
> Henry
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html


* Re: PG state issue
  2012-03-01  9:43 PG state issue Henry C Chang
  2012-03-01 10:34 ` Wido den Hollander
@ 2012-03-01 16:14 ` Sage Weil
  2012-03-05  9:18   ` Henry C Chang
  1 sibling, 1 reply; 5+ messages in thread
From: Sage Weil @ 2012-03-01 16:14 UTC (permalink / raw)
  To: Henry C Chang; +Cc: ceph-devel

Hi Henry, Wido,

On Thu, 1 Mar 2012, Henry C Chang wrote:
> With version 0.42, I found that PGs are no longer in the "degraded"
> state when the number of OSDs is smaller than the replication count.
> For example, if I create a cluster of one OSD with replication 2, all
> PGs are in the active+clean state. (The PGs were in active+clean+degraded
> state in earlier versions.)

The PG states were tweaked a fair bit for v0.43:

- new 'recovering' state means we are actively recovering the PG (no 
  longer implied by lack of 'clean')
- 'remapped' means we have temporarily remapped a pg to a specific set of 
  OSDs (other than what CRUSH gives us)
- 'clean' specifically means we have the right number of replicas and 
  aren't remapped.

...and the 'degraded' thing you are seeing is fixed.  This is all in place 
in the 'next' or 'master' branches.
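A rough model of those three rules (my own sketch for illustration, not the actual OSD code) could assemble the state string like this:

```python
def pg_state(acting: int, size: int, recovering: bool, remapped: bool) -> str:
    """Sketch of how a v0.43-style PG state string might be assembled."""
    states = ["active"]
    if recovering:
        # 'recovering' is now an explicit state, not implied by lack of 'clean'.
        states.append("recovering")
    if acting < size:
        states.append("degraded")
    if remapped:
        # Temporarily mapped to OSDs other than what CRUSH gives us.
        states.append("remapped")
    # 'clean' specifically means the right number of replicas and not remapped.
    if acting >= size and not remapped and not recovering:
        states.append("clean")
    return "+".join(states)

print(pg_state(acting=2, size=2, recovering=False, remapped=False))  # active+clean
print(pg_state(acting=1, size=2, recovering=False, remapped=False))  # active+degraded
```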

> I think this is a bug in cluster status reporting, although it has not
> caused me any other problems so far.

Yeah, it's simply a matter of how the internal state is displayed/reported 
to the monitor.

sage


* Re: PG state issue
  2012-03-01 16:14 ` Sage Weil
@ 2012-03-05  9:18   ` Henry C Chang
  2012-03-05 15:28     ` Sage Weil
  0 siblings, 1 reply; 5+ messages in thread
From: Henry C Chang @ 2012-03-05  9:18 UTC (permalink / raw)
  To: Sage Weil; +Cc: ceph-devel

Hi Sage,

I just tested v0.43. All pgs are shown active+degraded now. However,
is it possible to show the number (or percentage) of degraded objects
as in the earlier version?
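For reference, the degraded figure in the older output could be computed roughly like this (a sketch under my own assumption that Ceph counts missing object copies against the total expected copies):

```python
def degraded_percentage(num_objects: int, pool_size: int, acting_osds: int) -> float:
    """Sketch: missing object copies as a percentage of expected copies.

    Assumes each acting OSD (capped at pool_size) holds one copy of
    every object; the remaining expected copies count as degraded.
    """
    expected = num_objects * pool_size
    present = num_objects * min(acting_osds, pool_size)
    return 100.0 * (expected - present) / expected

# 100 objects, replication 2, only 1 OSD: half of all copies are missing.
print(f"{degraded_percentage(100, 2, 1):.3f}%")  # 50.000%
```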

Henry

On 2 Mar 2012 at 12:14 AM, Sage Weil <sage@newdream.net> wrote:
> Hi Henry, Wido,
>
> On Thu, 1 Mar 2012, Henry C Chang wrote:
>> With version 0.42, I found that PGs are no longer in the "degraded"
>> state when the number of OSDs is smaller than the replication count.
>> For example, if I create a cluster of one OSD with replication 2, all
>> PGs are in the active+clean state. (The PGs were in active+clean+degraded
>> state in earlier versions.)
>
> The PG states were tweaked a fair bit for v0.43:
>
> - new 'recovering' state means we are actively recovering the PG (no
>  longer implied by lack of 'clean')
> - 'remapped' means we have temporarily remapped a pg to a specific set of
>  OSDs (other than what CRUSH gives us)
> - 'clean' specifically means we have the right number of replicas and
>  aren't remapped.
>
> ...and the 'degraded' thing you are seeing is fixed.  This is all in place
> in the 'next' or 'master' branches.
>
>> I think this is a bug in cluster status reporting, although it has not
>> caused me any other problems so far.
>
> Yeah, it's simply a matter of how the internal state is displayed/reported
> to the monitor.
>
> sage


* Re: PG state issue
  2012-03-05  9:18   ` Henry C Chang
@ 2012-03-05 15:28     ` Sage Weil
  0 siblings, 0 replies; 5+ messages in thread
From: Sage Weil @ 2012-03-05 15:28 UTC (permalink / raw)
  To: Henry C Chang; +Cc: ceph-devel


On Mon, 5 Mar 2012, Henry C Chang wrote:
> Hi Sage,
> 
> I just tested v0.43. All pgs are shown active+degraded now. However,
> is it possible to show the number (or percentage) of degraded objects
> as in the earlier version?

Yeah.  Just opened http://tracker.newdream.net/issues/2137, will look at 
it this week.

Thanks!
sage


> 
> Henry
> 
> On 2 Mar 2012 at 12:14 AM, Sage Weil <sage@newdream.net> wrote:
> > Hi Henry, Wido,
> >
> > On Thu, 1 Mar 2012, Henry C Chang wrote:
> >> With version 0.42, I found that PGs are no longer in the "degraded"
> >> state when the number of OSDs is smaller than the replication count.
> >> For example, if I create a cluster of one OSD with replication 2, all
> >> PGs are in the active+clean state. (The PGs were in active+clean+degraded
> >> state in earlier versions.)
> >
> > The PG states were tweaked a fair bit for v0.43:
> >
> > - new 'recovering' state means we are actively recovering the PG (no
> >  longer implied by lack of 'clean')
> > - 'remapped' means we have temporarily remapped a pg to a specific set of
> >  OSDs (other than what CRUSH gives us)
> > - 'clean' specifically means we have the right number of replicas and
> >  aren't remapped.
> >
> > ...and the 'degraded' thing you are seeing is fixed.  This is all in place
> > in the 'next' or 'master' branches.
> >
> >> I think this is a bug in cluster status reporting, although it has not
> >> caused me any other problems so far.
> >
> > Yeah, it's simply a matter of how the internal state is displayed/reported
> > to the monitor.
> >
> > sage

