* osd recovery extremely slow with current master
From: Stefan Priebe @ 2012-11-13 20:33 UTC
  To: ceph-devel

Hi list,

osd recovery seems to be really slow with current master.

I see only 1-8 PGs in active+recovering out of 1200, even though there
is no load on the ceph cluster.

Greets,
Stefan
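
A minimal sketch of watching recovery progress from the ceph CLI
(standard commands; the exact output format varies by release):

    # stream cluster status updates, including recovering/backfilling PG counts
    ceph -w

    # one-shot summaries of cluster health and PG states
    ceph -s
    ceph pg stat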


* Re: osd recovery extremely slow with current master
From: Gregory Farnum @ 2012-11-19 23:21 UTC
  To: Stefan Priebe; +Cc: ceph-devel

Which version was this on? There was some fairly significant work done
to recovery recently to introduce a reservation scheme, along with some
other changes that might need different defaults.
-Greg

On Tue, Nov 13, 2012 at 12:33 PM, Stefan Priebe <s.priebe@profihost.ag> wrote:
> Hi list,
>
> osd recovery seems to be really slow with current master.
>
> I see only 1-8 PGs in active+recovering out of 1200, even though there
> is no load on the ceph cluster.
>
> Greets,
> Stefan


* Re: osd recovery extremely slow with current master
From: Stefan Priebe - Profihost AG @ 2012-11-23  9:16 UTC
  To: Gregory Farnum; +Cc: ceph-devel

This is with the current next branch as of today.

It just prints this line:
2012-11-23 10:15:29.927754 mon.0 [INF] pgmap v89614: 7632 pgs:
5956 active+clean, 446 active+remapped+wait_backfill,
540 active+degraded+wait_backfill, 690 active+degraded+remapped+wait_backfill;
0 bytes data, 2827 MB used, 4461 GB / 4464 GB avail; 1/3 degraded (33.333%)

There is no I/O or CPU load on any machine, and it took hours to
recover with 0 bytes of data (I deleted all images before trying this
again).

Greets,
Stefan
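
A minimal sketch of digging into PGs stuck in those wait_backfill
states (standard ceph CLI; <pgid> is a placeholder, substitute a real
PG id from the dump):

    # list PGs that have been stuck in an unclean state
    ceph pg dump_stuck unclean

    # ask a specific PG why it is waiting
    ceph pg <pgid> query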

On 20.11.2012 00:21, Gregory Farnum wrote:
> Which version was this on? There was some fairly significant work done
> to recovery recently to introduce a reservation scheme, along with some
> other changes that might need different defaults.
> -Greg
>
> On Tue, Nov 13, 2012 at 12:33 PM, Stefan Priebe <s.priebe@profihost.ag> wrote:
>> Hi list,
>>
>> osd recovery seems to be really slow with current master.
>>
>> I see only 1-8 PGs in active+recovering out of 1200, even though
>> there is no load on the ceph cluster.
>>
>> Greets,
>> Stefan


* Re: osd recovery extremely slow with current master
From: Gregory Farnum @ 2012-12-04 22:27 UTC
  To: Stefan Priebe - Profihost AG; +Cc: ceph-devel

Yeah, I checked with Sam, and you probably want to increase the "osd
max backfill" option to make it go faster in the future; this option
limits the number of PGs an OSD will be sending or receiving backfills
for at once. The default is currently set to 5, and it should probably
be much higher.
-Greg
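
A minimal sketch of raising that limit, using the option name as given
above (exact spelling, default, and injection syntax vary by Ceph
release, so verify against your version first):

    # ceph.conf on each OSD host -- persists across restarts
    [osd]
        osd max backfill = 10

    # or inject into all running OSDs at runtime
    ceph tell osd.* injectargs '--osd-max-backfill 10'

    # confirm the active value through an OSD's admin socket
    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | grep backfill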

On Fri, Nov 23, 2012 at 1:16 AM, Stefan Priebe - Profihost AG
<s.priebe@profihost.ag> wrote:
> This is with the current next branch as of today.
>
> It just prints this line:
> 2012-11-23 10:15:29.927754 mon.0 [INF] pgmap v89614: 7632 pgs:
> 5956 active+clean, 446 active+remapped+wait_backfill,
> 540 active+degraded+wait_backfill, 690 active+degraded+remapped+wait_backfill;
> 0 bytes data, 2827 MB used, 4461 GB / 4464 GB avail; 1/3 degraded (33.333%)
>
> There is no I/O or CPU load on any machine, and it took hours to
> recover with 0 bytes of data (I deleted all images before trying this
> again).
>
> Greets,
> Stefan
>
> On 20.11.2012 00:21, Gregory Farnum wrote:
>
>> Which version was this on? There was some fairly significant work done
>> to recovery recently to introduce a reservation scheme, along with some
>> other changes that might need different defaults.
>> -Greg
>>
>> On Tue, Nov 13, 2012 at 12:33 PM, Stefan Priebe <s.priebe@profihost.ag>
>> wrote:
>>>
>>> Hi list,
>>>
>>> osd recovery seems to be really slow with current master.
>>>
>>> I see only 1-8 PGs in active+recovering out of 1200, even though
>>> there is no load on the ceph cluster.
>>>
>>> Greets,
>>> Stefan
