* OSD suicide after being down/in for one day as it needs to search large amount of objects
From: Guang Yang @ 2014-08-19  6:30 UTC
  To: Ceph-devel; +Cc: david.z1003

Hi ceph-devel,
David (cc’ed) reported a bug (http://tracker.ceph.com/issues/9128) which we came across in our test cluster during failure testing. Basically, the way to reproduce it is to leave one OSD daemon down and in for a day while continuing to send write traffic. When the OSD daemon is started again, it hits the suicide timeout and kills itself.

After some analysis (details in the bug), David found that the op thread was busy searching for missing objects, and as the volume of objects to search increases, the thread is expected to run for correspondingly longer. Please refer to the bug for detailed logs.

One simple fix is to let the op thread reset the suicide timeout periodically while it is doing long-running work; another fix might be to cut the work into smaller pieces.

Any suggestion is welcome.

Thanks,
Guang


* Re: OSD suicide after being down/in for one day as it needs to search large amount of objects
From: Gregory Farnum @ 2014-08-19 22:09 UTC
  To: Guang Yang; +Cc: Ceph-devel, david.z1003

On Mon, Aug 18, 2014 at 11:30 PM, Guang Yang <yguang11@outlook.com> wrote:
> Hi ceph-devel,
> David (cc’ed) reported a bug (http://tracker.ceph.com/issues/9128) which we came across in our test cluster during failure testing. Basically, the way to reproduce it is to leave one OSD daemon down and in for a day while continuing to send write traffic. When the OSD daemon is started again, it hits the suicide timeout and kills itself.
>
> After some analysis (details in the bug), David found that the op thread was busy searching for missing objects, and as the volume of objects to search increases, the thread is expected to run for correspondingly longer. Please refer to the bug for detailed logs.

Can you talk a little more about what's going on here? At a quick
naive glance, I'm not seeing why leaving an OSD down and in should
require work based on the amount of write traffic. Perhaps if the rest
of the cluster was changing mappings...?

>
> One simple fix is to let the op thread reset the suicide timeout periodically while it is doing long-running work; another fix might be to cut the work into smaller pieces.

We do both of those things throughout the OSD (although I think the
first is simpler and more common); search for the accesses to
cct->get_heartbeat_map()->reset_timeout.
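For illustration, a minimal sketch of that first pattern, assuming a worker
that registers its own heartbeat handle (the registration call, grace values,
and work helpers below are assumptions for the sketch, not the actual OSD
code; it presumes the relevant Ceph headers such as common/HeartbeatMap.h):

    // Sketch only: a long-running worker keeps resetting its heartbeat
    // timeout while it is still making progress, so the suicide grace is
    // never exceeded between iterations.
    void long_job_sketch(CephContext *cct)
    {
      const time_t grace = 15;           // assumed warning grace (seconds)
      const time_t suicide_grace = 150;  // assumed suicide grace (seconds)
      heartbeat_handle_d *hb =
        cct->get_heartbeat_map()->add_worker("long_job");  // assumed setup
      while (more_work_to_do()) {        // hypothetical loop condition
        do_one_chunk_of_work();          // hypothetical unit of work
        // the call referenced above: tell the internal heartbeat that this
        // thread is alive and making progress
        cct->get_heartbeat_map()->reset_timeout(hb, grace, suicide_grace);
      }
      cct->get_heartbeat_map()->remove_worker(hb);
    }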
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com


* Re: OSD suicide after being down/in for one day as it needs to search large amount of objects
From: Guang Yang @ 2014-08-20 11:42 UTC
  To: Gregory Farnum; +Cc: Ceph-devel, david.z1003

Thanks Greg.
On Aug 20, 2014, at 6:09 AM, Gregory Farnum <greg@inktank.com> wrote:

> On Mon, Aug 18, 2014 at 11:30 PM, Guang Yang <yguang11@outlook.com> wrote:
>> Hi ceph-devel,
>> David (cc’ed) reported a bug (http://tracker.ceph.com/issues/9128) which we came across in our test cluster during failure testing. Basically, the way to reproduce it is to leave one OSD daemon down and in for a day while continuing to send write traffic. When the OSD daemon is started again, it hits the suicide timeout and kills itself.
>> 
>> After some analysis (details in the bug), David found that the op thread was busy searching for missing objects, and as the volume of objects to search increases, the thread is expected to run for correspondingly longer. Please refer to the bug for detailed logs.
> 
> Can you talk a little more about what's going on here? At a quick
> naive glance, I'm not seeing why leaving an OSD down and in should
> require work based on the amount of write traffic. Perhaps if the rest
> of the cluster was changing mappings…?
We increased the down-to-out time interval from 5 minutes to 2 days to avoid migrating data back and forth, which could increase latency; instead we intend to mark OSDs out manually. To validate this, we are testing some boundary cases in which the OSD stays down and in for about 1 day; however, when we try to bring it up again, it always fails because it hits the suicide timeout.
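For reference, the sort of ceph.conf change described above would look
roughly like this, assuming mon osd down out interval is the option being
raised from its ~5 minute default:

    [mon]
        # assumed tuning: keep a down OSD "in" for 2 days (172800 seconds)
        # so the operator marks it out manually rather than the monitor
        mon osd down out interval = 172800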
> 
>> 
>> One simple fix is to let the op thread reset the suicide timeout periodically while it is doing long-running work; another fix might be to cut the work into smaller pieces.
> 
> We do both of those things throughout the OSD (although I think the
> first is simpler and more common); search for the accesses to
> cct->get_heartbeat_map()->reset_timeout.
> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
> 



* Re: OSD suicide after being down/in for one day as it needs to search large amount of objects
From: Sage Weil @ 2014-08-20 15:19 UTC
  To: Guang Yang; +Cc: Gregory Farnum, Ceph-devel, david.z1003

On Wed, 20 Aug 2014, Guang Yang wrote:
> Thanks Greg.
> On Aug 20, 2014, at 6:09 AM, Gregory Farnum <greg@inktank.com> wrote:
> 
> > On Mon, Aug 18, 2014 at 11:30 PM, Guang Yang <yguang11@outlook.com> wrote:
> >> Hi ceph-devel,
> >> David (cc’ed) reported a bug (http://tracker.ceph.com/issues/9128) which we came across in our test cluster during failure testing. Basically, the way to reproduce it is to leave one OSD daemon down and in for a day while continuing to send write traffic. When the OSD daemon is started again, it hits the suicide timeout and kills itself.
> >> 
> >> After some analysis (details in the bug), David found that the op thread was busy searching for missing objects, and as the volume of objects to search increases, the thread is expected to run for correspondingly longer. Please refer to the bug for detailed logs.
> > 
> > Can you talk a little more about what's going on here? At a quick
> > naive glance, I'm not seeing why leaving an OSD down and in should
> > require work based on the amount of write traffic. Perhaps if the rest
> > of the cluster was changing mappings…?
> We increased the down-to-out time interval from 5 minutes to 2 days to
> avoid migrating data back and forth, which could increase latency;
> instead we intend to mark OSDs out manually. To validate this, we are
> testing some boundary cases in which the OSD stays down and in for
> about 1 day; however, when we try to bring it up again, it always
> fails because it hits the suicide timeout.

Looking at the log snippet I see the PG had log range

	5481'28667,5646'34066

which is ~5500 log events.  The default max is 10k.  search_for_missing is 
basically going to iterate over this list and check if the object is 
present locally.

If that's slow enough to trigger a suicide (which it seems to be), the 
fix is simple: as Greg says we just need to make it probe the internal 
heartbeat code to indicate progress.  In most contexts this is done by 
passing a ThreadPool::TPHandle &handle into each method and then 
calling handle.reset_tp_timeout() on each iteration.  The same needs to be 
done for search_for_missing...
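A minimal sketch of what that could look like for a search_for_missing-style
loop, assuming the usual TPHandle plumbing (the local-presence and
bookkeeping helpers below are hypothetical, not the actual PG code):

    // Sketch only: walk the pg log entries, checking each object locally,
    // and poke the thread pool handle every iteration so the internal
    // heartbeat sees progress and the suicide timeout is not hit.
    void search_for_missing_sketch(const std::list<pg_log_entry_t> &entries,
                                   ThreadPool::TPHandle &handle)
    {
      for (std::list<pg_log_entry_t>::const_iterator p = entries.begin();
           p != entries.end(); ++p) {
        handle.reset_tp_timeout();             // reset grace/suicide timers
        if (!have_object_locally(p->soid))     // hypothetical presence check
          record_missing(p->soid);             // hypothetical bookkeeping
      }
    }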

sage



* Re: OSD suicide after being down/in for one day as it needs to search large amount of objects
From: Guang Yang @ 2014-08-21  1:42 UTC
  To: Sage Weil; +Cc: Gregory Farnum, Ceph-devel, david.z1003

Thanks Sage. We will provide a patch based on this.

Thanks,
Guang

On Aug 20, 2014, at 11:19 PM, Sage Weil <sweil@redhat.com> wrote:

> On Wed, 20 Aug 2014, Guang Yang wrote:
>> Thanks Greg.
>> On Aug 20, 2014, at 6:09 AM, Gregory Farnum <greg@inktank.com> wrote:
>> 
>>> On Mon, Aug 18, 2014 at 11:30 PM, Guang Yang <yguang11@outlook.com> wrote:
>>>> Hi ceph-devel,
>>>> David (cc’ed) reported a bug (http://tracker.ceph.com/issues/9128) which we came across in our test cluster during failure testing. Basically, the way to reproduce it is to leave one OSD daemon down and in for a day while continuing to send write traffic. When the OSD daemon is started again, it hits the suicide timeout and kills itself.
>>>> 
>>>> After some analysis (details in the bug), David found that the op thread was busy searching for missing objects, and as the volume of objects to search increases, the thread is expected to run for correspondingly longer. Please refer to the bug for detailed logs.
>>> 
>>> Can you talk a little more about what's going on here? At a quick
>>> naive glance, I'm not seeing why leaving an OSD down and in should
>>> require work based on the amount of write traffic. Perhaps if the rest
>>> of the cluster was changing mappings…?
>> We increased the down-to-out time interval from 5 minutes to 2 days to
>> avoid migrating data back and forth, which could increase latency;
>> instead we intend to mark OSDs out manually. To validate this, we are
>> testing some boundary cases in which the OSD stays down and in for
>> about 1 day; however, when we try to bring it up again, it always
>> fails because it hits the suicide timeout.
> 
> Looking at the log snippet I see the PG had log range
> 
> 	5481'28667,5646'34066
> 
> which is ~5500 log events.  The default max is 10k.  search_for_missing is 
> basically going to iterate over this list and check if the object is 
> present locally.
> 
> If that's slow enough to trigger a suicide (which it seems to be), the 
> fix is simple: as Greg says we just need to make it probe the internal 
> heartbeat code to indicate progress.  In most contexts this is done by 
> passing a ThreadPool::TPHandle &handle into each method and then 
> calling handle.reset_tp_timeout() on each iteration.  The same needs to be 
> done for search_for_missing...
> 
> sage
> 
> 

