* log messages about inconsistent data
@ 2011-01-24  7:18 Ravi Pinjala
  2011-01-24 18:40 ` Samuel Just
  0 siblings, 1 reply; 4+ messages in thread
From: Ravi Pinjala @ 2011-01-24  7:18 UTC (permalink / raw)
  To: ceph-devel

Do I need to be worried about this?

2011-01-23 23:12:06.328866   log 2011-01-23 23:12:05.316993 osd1
192.168.1.11:6801/9447 45 : [ERR] 1.1 scrub osd0 missing
10000017737.00000000/head
2011-01-23 23:12:06.328866   log 2011-01-23 23:12:05.317429 osd1
192.168.1.11:6801/9447 46 : [ERR] 1.1 scrub stat mismatch, got 7/136
objects, 0/0 clones, 12356/8682277 bytes, 17/8550 kb.
2011-01-23 23:12:08.230768    pg v129643: 270 pgs: 262 active+clean, 8
active+clean+inconsistent; 877 GB data, 1707 GB used, 1320 GB / 3036
GB avail

I would expect ceph to fix the inconsistent PGs at this point, but it
just continues background scrubbing. Does inconsistent data get
cleaned up automatically from other replicas, or is there something
that I need to fix manually here?

--Ravi


* Re: log messages about inconsistent data
  2011-01-24  7:18 log messages about inconsistent data Ravi Pinjala
@ 2011-01-24 18:40 ` Samuel Just
       [not found]   ` <AANLkTi=y0KcoerXLdfoeaHan8C5FVopuQYZp8hVg6o=Z@mail.gmail.com>
  0 siblings, 1 reply; 4+ messages in thread
From: Samuel Just @ 2011-01-24 18:40 UTC (permalink / raw)
  Cc: ceph-devel

  'ceph pg repair <pgid>' should cause the OSD to repair the
inconsistency in most cases.  You can get the pgid by grepping the
output of 'ceph pg dump' for the inconsistent PG.
-Sam
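
A minimal sketch of that workflow, assuming the 2011-era ceph CLI
quoted in this thread and using PG 1.1 from the scrub errors above:

  # List the PG map and pick out the PGs flagged inconsistent.
  ceph pg dump -o - | grep inconsistent

  # Ask the OSDs serving one of them to repair it, e.g. PG 1.1.
  ceph pg repair 1.1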

On 01/23/2011 11:18 PM, Ravi Pinjala wrote:
> Do I need to be worried about this?
>
> 2011-01-23 23:12:06.328866   log 2011-01-23 23:12:05.316993 osd1
> 192.168.1.11:6801/9447 45 : [ERR] 1.1 scrub osd0 missing
> 10000017737.00000000/head
> 2011-01-23 23:12:06.328866   log 2011-01-23 23:12:05.317429 osd1
> 192.168.1.11:6801/9447 46 : [ERR] 1.1 scrub stat mismatch, got 7/136
> objects, 0/0 clones, 12356/8682277 bytes, 17/8550 kb.
> 2011-01-23 23:12:08.230768    pg v129643: 270 pgs: 262 active+clean, 8
> active+clean+inconsistent; 877 GB data, 1707 GB used, 1320 GB / 3036
> GB avail
>
> I would expect ceph to fix the inconsistent PGs at this point, but it
> just continues background scrubbing. Does inconsistent data get
> cleaned up automatically from other replicas, or is there something
> that I need to fix manually here?
>
> --Ravi



* Re: log messages about inconsistent data
       [not found]   ` <AANLkTi=y0KcoerXLdfoeaHan8C5FVopuQYZp8hVg6o=Z@mail.gmail.com>
@ 2011-01-25  9:00     ` Ravi Pinjala
  2011-01-25 19:15       ` Samuel Just
  0 siblings, 1 reply; 4+ messages in thread
From: Ravi Pinjala @ 2011-01-25  9:00 UTC (permalink / raw)
  To: ceph-devel

A follow-up question to this: After repairing those PGs, my cluster
seems to have come to rest in this state.

2011-01-25 00:26:58.447007    pg v130979: 270 pgs: 8 active, 262
active+clean; 822 GB data, 1762 GB used, 1265 GB / 3036 GB avail;
25/556114 degraded (0.004%)

I don't know whether or not it's in a functional state, since I'm
having MDS issues [1], so I can't actually mount it and poke around.
Still, should I be worried that those 8 PGs aren't being marked
'clean'?

[1] http://tracker.newdream.net/issues/733

On Mon, Jan 24, 2011 at 7:45 PM, Ravi Pinjala <pstatic@gmail.com> wrote:
> That seems to be working. Thanks!
>
> --Ravi
>
> On Mon, Jan 24, 2011 at 10:40 AM, Samuel Just <samuelj@hq.newdream.net> wrote:
>>  'ceph pg repair <pgid>' should cause the OSD to repair the
>> inconsistency in most cases.  You can get the pgid by grepping the
>> output of 'ceph pg dump' for the inconsistent PG.
>> -Sam
>>
>> On 01/23/2011 11:18 PM, Ravi Pinjala wrote:
>>>
>>> Do I need to be worried about this?
>>>
>>> 2011-01-23 23:12:06.328866   log 2011-01-23 23:12:05.316993 osd1
>>> 192.168.1.11:6801/9447 45 : [ERR] 1.1 scrub osd0 missing
>>> 10000017737.00000000/head
>>> 2011-01-23 23:12:06.328866   log 2011-01-23 23:12:05.317429 osd1
>>> 192.168.1.11:6801/9447 46 : [ERR] 1.1 scrub stat mismatch, got 7/136
>>> objects, 0/0 clones, 12356/8682277 bytes, 17/8550 kb.
>>> 2011-01-23 23:12:08.230768    pg v129643: 270 pgs: 262 active+clean, 8
>>> active+clean+inconsistent; 877 GB data, 1707 GB used, 1320 GB / 3036
>>> GB avail
>>>
>>> I would expect ceph to fix the inconsistent PGs at this point, but it
>>> just continues background scrubbing. Does inconsistent data get
>>> cleaned up automatically from other replicas, or is there something
>>> that I need to fix manually here?
>>>
>>> --Ravi


* Re: log messages about inconsistent data
  2011-01-25  9:00     ` Ravi Pinjala
@ 2011-01-25 19:15       ` Samuel Just
  0 siblings, 0 replies; 4+ messages in thread
From: Samuel Just @ 2011-01-25 19:15 UTC (permalink / raw)
  To: ceph-devel

  Could you post the output of 'ceph pg dump -o -'?
-Sam
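
A minimal sketch of collecting that output, assuming the same CLI
syntax (the file path is just an example):

  # Dump the full PG map to stdout; a rough filter for the PGs that
  # are not yet clean:
  ceph pg dump -o - | grep -v clean

  # Or write the complete dump to a file to post:
  ceph pg dump -o /tmp/pg-dump.txt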

On 01/25/2011 01:00 AM, Ravi Pinjala wrote:
> A follow-up question to this: After repairing those PGs, my cluster
> seems to have come to rest in this state.
>
> 2011-01-25 00:26:58.447007    pg v130979: 270 pgs: 8 active, 262
> active+clean; 822 GB data, 1762 GB used, 1265 GB / 3036 GB avail;
> 25/556114 degraded (0.004%)
>
> I don't know whether or not it's in a functional state, since I'm
> having MDS issues [1], so I can't actually mount it and poke around.
> Still, should I be worried that those 8 PGs aren't being marked
> 'clean'?
>
> [1] http://tracker.newdream.net/issues/733
>
>> On Mon, Jan 24, 2011 at 7:45 PM, Ravi Pinjala <pstatic@gmail.com> wrote:
>> That seems to be working. Thanks!
>>
>> --Ravi
>>
>>> On Mon, Jan 24, 2011 at 10:40 AM, Samuel Just <samuelj@hq.newdream.net> wrote:
>>>   'ceph pg repair <pgid>' should cause the OSD to repair the
>>> inconsistency in most cases.  You can get the pgid by grepping the
>>> output of 'ceph pg dump' for the inconsistent PG.
>>> -Sam
>>>
>>> On 01/23/2011 11:18 PM, Ravi Pinjala wrote:
>>>> Do I need to be worried about this?
>>>>
>>>> 2011-01-23 23:12:06.328866   log 2011-01-23 23:12:05.316993 osd1
>>>> 192.168.1.11:6801/9447 45 : [ERR] 1.1 scrub osd0 missing
>>>> 10000017737.00000000/head
>>>> 2011-01-23 23:12:06.328866   log 2011-01-23 23:12:05.317429 osd1
>>>> 192.168.1.11:6801/9447 46 : [ERR] 1.1 scrub stat mismatch, got 7/136
>>>> objects, 0/0 clones, 12356/8682277 bytes, 17/8550 kb.
>>>> 2011-01-23 23:12:08.230768    pg v129643: 270 pgs: 262 active+clean, 8
>>>> active+clean+inconsistent; 877 GB data, 1707 GB used, 1320 GB / 3036
>>>> GB avail
>>>>
>>>> I would expect ceph to fix the inconsistent PGs at this point, but it
>>>> just continues background scrubbing. Does inconsistent data get
>>>> cleaned up automatically from other replicas, or is there something
>>>> that I need to fix manually here?
>>>>
>>>> --Ravi

