* MDS stuck in a crash loop
From: Milosz Tanski @ 2015-10-11 17:09 UTC (permalink / raw)
  To: ceph-devel

About an hour ago my MDSs (primary and follower) started ping-pong
crashing with this message. I've spent about 30 minutes looking into
it but nothing yet.

This is from a 0.94.3 MDS

   -25> 2015-10-11 17:01:23.585220 7fd4f1fa4700  1 --
10.0.5.31:6802/2743 --> 10.0.5.57:6804/32341 --
osd_op(mds.0.3800:90496 300.0000e19b [write 2834681~1214] 1.99e72fa5
ondisk+write+known_if_redirected e21243) v5 -- ?+0 0x3092aa00 con
0x4b3a3c0
   -24> 2015-10-11 17:01:23.585258 7fd4f1fa4700  5 mds.0.log
_submit_thread 242244863415~1194 : EUpdate scatter_writebehind
[metablob 100014affd5, 2 dirs]
   -23> 2015-10-11 17:01:23.585291 7fd4f1fa4700  1 --
10.0.5.31:6802/2743 --> 10.0.5.57:6804/32341 --
osd_op(mds.0.3800:90497 300.0000e19b [write 2835895~1214] 1.99e72fa5
ondisk+write+known_if_redirected e21243) v5 -- ?+0 0x3092a780 con
0x4b3a3c0
   -22> 2015-10-11 17:01:23.585329 7fd4f1fa4700  5 mds.0.log
_submit_thread 242244864629~1194 : EUpdate scatter_writebehind
[metablob 100014b61f8, 2 dirs]
   -21> 2015-10-11 17:01:23.585363 7fd4f1fa4700  1 --
10.0.5.31:6802/2743 --> 10.0.5.57:6804/32341 --
osd_op(mds.0.3800:90498 300.0000e19b [write 2837109~1214] 1.99e72fa5
ondisk+write+known_if_redirected e21243) v5 -- ?+0 0x3092a500 con
0x4b3a3c0
   -20> 2015-10-11 17:01:23.585401 7fd4f1fa4700  5 mds.0.log
_submit_thread 242244865843~1194 : EUpdate scatter_writebehind
[metablob 100014b6b17, 2 dirs]
   -19> 2015-10-11 17:01:23.585435 7fd4f1fa4700  1 --
10.0.5.31:6802/2743 --> 10.0.5.57:6804/32341 --
osd_op(mds.0.3800:90499 300.0000e19b [write 2838323~1214] 1.99e72fa5
ondisk+write+known_if_redirected e21243) v5 -- ?+0 0x3092a280 con
0x4b3a3c0
   -18> 2015-10-11 17:01:23.585473 7fd4f1fa4700  5 mds.0.log
_submit_thread 242244867057~1194 : EUpdate scatter_writebehind
[metablob 100014ed078, 2 dirs]
   -17> 2015-10-11 17:01:23.585507 7fd4f1fa4700  1 --
10.0.5.31:6802/2743 --> 10.0.5.57:6804/32341 --
osd_op(mds.0.3800:90500 300.0000e19b [write 2839537~1214] 1.99e72fa5
ondisk+write+known_if_redirected e21243) v5 -- ?+0 0x3092a000 con
0x4b3a3c0
   -16> 2015-10-11 17:01:23.585547 7fd4f1fa4700  5 mds.0.log
_submit_thread 242244868271~1194 : EUpdate scatter_writebehind
[metablob 100014afa63, 2 dirs]
   -15> 2015-10-11 17:01:23.585581 7fd4f1fa4700  1 --
10.0.5.31:6802/2743 --> 10.0.5.57:6804/32341 --
osd_op(mds.0.3800:90501 300.0000e19b [write 2840751~1214] 1.99e72fa5
ondisk+write+known_if_redirected e21243) v5 -- ?+0 0x46fc3c00 con
0x4b3a3c0
   -14> 2015-10-11 17:01:23.585622 7fd4f1fa4700  5 mds.0.log
_submit_thread 242244869485~1194 : EUpdate scatter_writebehind
[metablob 100014b1d83, 2 dirs]
   -13> 2015-10-11 17:01:23.585661 7fd4f1fa4700  1 --
10.0.5.31:6802/2743 --> 10.0.5.57:6804/32341 --
osd_op(mds.0.3800:90502 300.0000e19b [write 2841965~1214] 1.99e72fa5
ondisk+write+known_if_redirected e21243) v5 -- ?+0 0x46fc3980 con
0x4b3a3c0
   -12> 2015-10-11 17:01:23.585702 7fd4f1fa4700  5 mds.0.log
_submit_thread 242244870699~1194 : EUpdate scatter_writebehind
[metablob 100014b2792, 2 dirs]
   -11> 2015-10-11 17:01:23.585736 7fd4f1fa4700  1 --
10.0.5.31:6802/2743 --> 10.0.5.57:6804/32341 --
osd_op(mds.0.3800:90503 300.0000e19b [write 2843179~1214] 1.99e72fa5
ondisk+write+known_if_redirected e21243) v5 -- ?+0 0x46fc3700 con
0x4b3a3c0
   -10> 2015-10-11 17:01:23.585775 7fd4f1fa4700  5 mds.0.log
_submit_thread 242244871913~1194 : EUpdate scatter_writebehind
[metablob 100015e4b10, 2 dirs]
    -9> 2015-10-11 17:01:23.585807 7fd4f1fa4700  1 --
10.0.5.31:6802/2743 --> 10.0.5.57:6804/32341 --
osd_op(mds.0.3800:90504 300.0000e19b [write 2844393~1214] 1.99e72fa5
ondisk+write+known_if_redirected e21243) v5 -- ?+0 0x46fc3480 con
0x4b3a3c0
    -8> 2015-10-11 17:01:23.585847 7fd4f1fa4700  5 mds.0.log
_submit_thread 242244873127~1194 : EUpdate scatter_writebehind
[metablob 100016101d5, 2 dirs]
    -7> 2015-10-11 17:01:23.585883 7fd4f1fa4700  1 --
10.0.5.31:6802/2743 --> 10.0.5.57:6804/32341 --
osd_op(mds.0.3800:90505 300.0000e19b [write 2845607~1214] 1.99e72fa5
ondisk+write+known_if_redirected e21243) v5 -- ?+0 0x46fc3200 con
0x4b3a3c0
    -6> 2015-10-11 17:01:23.585923 7fd4f1fa4700  5 mds.0.log
_submit_thread 242244874341~1194 : EUpdate scatter_writebehind
[metablob 10000000001, 2 dirs]
    -5> 2015-10-11 17:01:23.585956 7fd4f1fa4700  1 --
10.0.5.31:6802/2743 --> 10.0.5.57:6804/32341 --
osd_op(mds.0.3800:90506 300.0000e19b [write 2846821~1214] 1.99e72fa5
ondisk+write+known_if_redirected e21243) v5 -- ?+0 0x46fc2f80 con
0x4b3a3c0
    -4> 2015-10-11 17:01:23.585996 7fd4f1fa4700  5 mds.0.log
_submit_thread 242244875555~1194 : EUpdate scatter_writebehind
[metablob 100015cb082, 2 dirs]
    -3> 2015-10-11 17:01:23.586029 7fd4f1fa4700  1 --
10.0.5.31:6802/2743 --> 10.0.5.57:6804/32341 --
osd_op(mds.0.3800:90507 300.0000e19b [write 2848035~1214] 1.99e72fa5
ondisk+write+known_if_redirected e21243) v5 -- ?+0 0x46fc2d00 con
0x4b3a3c0
    -2> 2015-10-11 17:01:23.590077 7fd4f1fa4700  5 mds.0.log
_submit_thread 242244876769~1194 : EUpdate scatter_writebehind
[metablob 100015cb8b1, 2 dirs]
    -1> 2015-10-11 17:01:23.590125 7fd4f1fa4700  1 --
10.0.5.31:6802/2743 --> 10.0.5.57:6804/32341 --
osd_op(mds.0.3800:90508 300.0000e19b [write 2849249~1214] 1.99e72fa5
ondisk+write+known_if_redirected e21243) v5 -- ?+0 0x46fc2a80 con
0x4b3a3c0
     0> 2015-10-11 17:01:23.596008 7fd4f52ad700 -1 mds/SessionMap.cc:
In function 'virtual void C_IO_SM_Save::finish(int)' thread
7fd4f52ad700 time 2015-10-11 17:01:23.594089
mds/SessionMap.cc: 120: FAILED assert(r == 0)

 ceph version 0.94.3 (95cefea9fd9ab740263bf8bb4796fd864d9afe2b)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char
const*)+0x8b) [0x94cc1b]
 2: /usr/bin/ceph-mds() [0x7c7df1]
 3: (MDSIOContextBase::complete(int)+0x81) [0x7c83b1]
 4: (Finisher::finisher_thread_entry()+0x1a0) [0x87f490]
 5: (()+0x8182) [0x7fd4fd031182]
 6: (clone()+0x6d) [0x7fd4fb7a047d]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is
needed to interpret this.


-- 
Milosz Tanski
CTO
16 East 34th Street, 15th floor
New York, NY 10016

p: 646-253-9055
e: milosz@adfin.com


* Re: MDS stuck in a crash loop
From: Gregory Farnum @ 2015-10-11 17:16 UTC (permalink / raw)
  To: Milosz Tanski; +Cc: ceph-devel

On Sun, Oct 11, 2015 at 10:09 AM, Milosz Tanski <milosz@adfin.com> wrote:
> About an hour ago my MDSs (primary and follower) started ping-pong
> crashing with this message. I've spent about 30 minutes looking into
> it but nothing yet.
>
> This is from a 0.94.3 MDS
>

>      0> 2015-10-11 17:01:23.596008 7fd4f52ad700 -1 mds/SessionMap.cc:
> In function 'virtual void C_IO_SM_Save::finish(int)' thread
> 7fd4f52ad700 time 2015-10-11 17:01:23.594089
> mds/SessionMap.cc: 120: FAILED assert(r == 0)

These "r == 0" asserts pretty much always mean that the MDS did did a
read or write to RADOS (the OSDs) and got an error of some kind back.
(Or in the case of the OSDs, access to the local filesystem returned
an error, etc.) I don't think these writes include any safety checks
which would let the MDS break it which means that probably the OSD is
actually returning an error — odd, but not impossible.

Notice that the assert happened in thread 7fd4f52ad700, and look for
the stuff in that thread. You should be able to find an OSD op reply
(on the SessionMap object) coming in and reporting an error code.
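
For example, something like this narrows it down (sketching the
commands from memory; the log path is the usual default and may be
different on your boxes):

  # only the lines logged by the asserting thread
  $ grep '7fd4f52ad700' /var/log/ceph/ceph-mds.*.log | grep 'osd_op_reply'
  # or: every OSD reply that did not come back clean
  $ grep 'osd_op_reply' /var/log/ceph/ceph-mds.*.log | grep -v 'ondisk = 0'
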
-Greg

>
>  ceph version 0.94.3 (95cefea9fd9ab740263bf8bb4796fd864d9afe2b)
>  1: (ceph::__ceph_assert_fail(char const*, char const*, int, char
> const*)+0x8b) [0x94cc1b]
>  2: /usr/bin/ceph-mds() [0x7c7df1]
>  3: (MDSIOContextBase::complete(int)+0x81) [0x7c83b1]
>  4: (Finisher::finisher_thread_entry()+0x1a0) [0x87f490]
>  5: (()+0x8182) [0x7fd4fd031182]
>  6: (clone()+0x6d) [0x7fd4fb7a047d]
>  NOTE: a copy of the executable, or `objdump -rdS <executable>` is
> needed to interpret this.
>
>
> --
> Milosz Tanski
> CTO
> 16 East 34th Street, 15th floor
> New York, NY 10016
>
> p: 646-253-9055
> e: milosz@adfin.com


* Re: MDS stuck in a crash loop
From: Milosz Tanski @ 2015-10-11 21:24 UTC (permalink / raw)
  To: Gregory Farnum; +Cc: ceph-devel

On Sun, Oct 11, 2015 at 1:16 PM, Gregory Farnum <gfarnum@redhat.com> wrote:
> On Sun, Oct 11, 2015 at 10:09 AM, Milosz Tanski <milosz@adfin.com> wrote:
>> About an hour ago my MDSs (primary and follower) started ping-pong
>> crashing with this message. I've spent about 30 minutes looking into
>> it but nothing yet.
>>
>> This is from a 0.94.3 MDS
>>
>
>>      0> 2015-10-11 17:01:23.596008 7fd4f52ad700 -1 mds/SessionMap.cc:
>> In function 'virtual void C_IO_SM_Save::finish(int)' thread
>> 7fd4f52ad700 time 2015-10-11 17:01:23.594089
>> mds/SessionMap.cc: 120: FAILED assert(r == 0)
>
> These "r == 0" asserts pretty much always mean that the MDS did did a
> read or write to RADOS (the OSDs) and got an error of some kind back.
> (Or in the case of the OSDs, access to the local filesystem returned
> an error, etc.) I don't think these writes include any safety checks
> which would let the MDS break it which means that probably the OSD is
> actually returning an error — odd, but not impossible.
>
> Notice that the assert happened in thread 7fd4f52ad700, and look for
> the stuff in that thread. You should be able to find an OSD op reply
> (on the SessionMap object) coming in and reporting an error code.
> -Greg

I see only two error ops in that whole MDS session. Neither one happened
on the same thread (7f5ab6000700 in this file). But it looks like the
only sessionmap error is the -90 "Message too long" one.

mtanski@tiny:~$ cat single_crash.log | grep 'osd_op_reply' | grep -v
'ondisk = 0'
 -3946> 2015-10-11 20:51:11.013965 7f5ab20f2700  1 --
10.0.5.31:6802/27121 <== osd.25 10.0.5.57:6804/32341 6163 ====
osd_op_reply(46349 mds0_sessionmap [writefull 0~95168363] v0'0 uv0
ondisk = -90 ((90) Message too long)) v6 ==== 182+0+0 (2955408122 0 0)
0x3a55d340 con 0x3d5a3c0
  -705> 2015-10-11 20:51:11.374132 7f5ab22f4700  1 --
10.0.5.31:6802/27121 <== osd.28 10.0.5.50:6801/1787 5297 ====
osd_op_reply(48004 300.0000e274 [delete] v0'0 uv1349638 ondisk = -2
((2) No such file or directory)) v6 ==== 179+0+0 (1182549251 0 0)
0x66c5c80 con 0x3d5a7e0

Any idea what this could be Greg?

>
>>
>>  ceph version 0.94.3 (95cefea9fd9ab740263bf8bb4796fd864d9afe2b)
>>  1: (ceph::__ceph_assert_fail(char const*, char const*, int, char
>> const*)+0x8b) [0x94cc1b]
>>  2: /usr/bin/ceph-mds() [0x7c7df1]
>>  3: (MDSIOContextBase::complete(int)+0x81) [0x7c83b1]
>>  4: (Finisher::finisher_thread_entry()+0x1a0) [0x87f490]
>>  5: (()+0x8182) [0x7fd4fd031182]
>>  6: (clone()+0x6d) [0x7fd4fb7a047d]
>>  NOTE: a copy of the executable, or `objdump -rdS <executable>` is
>> needed to interpret this.

-- 
Milosz Tanski
CTO
16 East 34th Street, 15th floor
New York, NY 10016

p: 646-253-9055
e: milosz@adfin.com


* Re: MDS stuck in a crash loop
From: Milosz Tanski @ 2015-10-11 21:33 UTC (permalink / raw)
  To: Gregory Farnum; +Cc: ceph-devel

On Sun, Oct 11, 2015 at 5:24 PM, Milosz Tanski <milosz@adfin.com> wrote:
> On Sun, Oct 11, 2015 at 1:16 PM, Gregory Farnum <gfarnum@redhat.com> wrote:
>> On Sun, Oct 11, 2015 at 10:09 AM, Milosz Tanski <milosz@adfin.com> wrote:
>>> About an hour ago my MDSs (primary and follower) started ping-pong
>>> crashing with this message. I've spent about 30 minutes looking into
>>> it but nothing yet.
>>>
>>> This is from a 0.94.3 MDS
>>>
>>
>>>      0> 2015-10-11 17:01:23.596008 7fd4f52ad700 -1 mds/SessionMap.cc:
>>> In function 'virtual void C_IO_SM_Save::finish(int)' thread
>>> 7fd4f52ad700 time 2015-10-11 17:01:23.594089
>>> mds/SessionMap.cc: 120: FAILED assert(r == 0)
>>
>> These "r == 0" asserts pretty much always mean that the MDS did did a
>> read or write to RADOS (the OSDs) and got an error of some kind back.
>> (Or in the case of the OSDs, access to the local filesystem returned
>> an error, etc.) I don't think these writes include any safety checks
>> which would let the MDS break it which means that probably the OSD is
>> actually returning an error — odd, but not impossible.
>>
>> Notice that the assert happened in thread 7fd4f52ad700, and look for
>> the stuff in that thread. You should be able to find an OSD op reply
>> (on the SessionMap object) coming in and reporting an error code.
>> -Greg
>
> I only two error ops in that whole MDS session. Neither one happened
> on the same thread (7f5ab6000700 in this file). But it looks like the
> only session map is the -90 "Message too long" one.
>
> mtanski@tiny:~$ cat single_crash.log | grep 'osd_op_reply' | grep -v
> 'ondisk = 0'
>  -3946> 2015-10-11 20:51:11.013965 7f5ab20f2700  1 --
> 10.0.5.31:6802/27121 <== osd.25 10.0.5.57:6804/32341 6163 ====
> osd_op_reply(46349 mds0_sessionmap [writefull 0~95168363] v0'0 uv0
> ondisk = -90 ((90) Message too long)) v6 ==== 182+0+0 (2955408122 0 0)
> 0x3a55d340 con 0x3d5a3c0
>   -705> 2015-10-11 20:51:11.374132 7f5ab22f4700  1 --
> 10.0.5.31:6802/27121 <== osd.28 10.0.5.50:6801/1787 5297 ====
> osd_op_reply(48004 300.0000e274 [delete] v0'0 uv1349638 ondisk = -2
> ((2) No such file or directory)) v6 ==== 179+0+0 (1182549251 0 0)
> 0x66c5c80 con 0x3d5a7e0
>
> Any idea what this could be Greg?

To follow this up, I found this ticket from 9 months ago,
http://tracker.ceph.com/issues/10449, where Yan says:

"it's a kernel bug. hang request prevents mds from trimming
completed_requests in sessionmap. there is nothing to do with mds.
(maybe we should add some code to MDS to show warning when this bug
happens)"

When I was debugging this I saw an OSD (not cephfs client) operation
stuck for a long time along with the MDS error:

HEALTH_WARN 1 requests are blocked > 32 sec; 1 osds have slow
requests; mds cluster is degraded; mds0: Behind on trimming (709/30)
1 ops are blocked > 16777.2 sec
1 ops are blocked > 16777.2 sec on osd.28

I did eventually bounce the OSD in question and it hasn't become stuck
since, but the MDS is still crashing every time with the "Message too
long" error on the session map.

I'm not quite sure where to go from here.

>
>>
>>>
>>>  ceph version 0.94.3 (95cefea9fd9ab740263bf8bb4796fd864d9afe2b)
>>>  1: (ceph::__ceph_assert_fail(char const*, char const*, int, char
>>> const*)+0x8b) [0x94cc1b]
>>>  2: /usr/bin/ceph-mds() [0x7c7df1]
>>>  3: (MDSIOContextBase::complete(int)+0x81) [0x7c83b1]
>>>  4: (Finisher::finisher_thread_entry()+0x1a0) [0x87f490]
>>>  5: (()+0x8182) [0x7fd4fd031182]
>>>  6: (clone()+0x6d) [0x7fd4fb7a047d]
>>>  NOTE: a copy of the executable, or `objdump -rdS <executable>` is
>>> needed to interpret this.
>
> --
> Milosz Tanski
> CTO
> 16 East 34th Street, 15th floor
> New York, NY 10016
>
> p: 646-253-9055
> e: milosz@adfin.com



-- 
Milosz Tanski
CTO
16 East 34th Street, 15th floor
New York, NY 10016

p: 646-253-9055
e: milosz@adfin.com


* Re: MDS stuck in a crash loop
From: Milosz Tanski @ 2015-10-11 22:01 UTC (permalink / raw)
  To: Gregory Farnum; +Cc: ceph-devel

On Sun, Oct 11, 2015 at 5:33 PM, Milosz Tanski <milosz@adfin.com> wrote:
> On Sun, Oct 11, 2015 at 5:24 PM, Milosz Tanski <milosz@adfin.com> wrote:
>> On Sun, Oct 11, 2015 at 1:16 PM, Gregory Farnum <gfarnum@redhat.com> wrote:
>>> On Sun, Oct 11, 2015 at 10:09 AM, Milosz Tanski <milosz@adfin.com> wrote:
>>>> About an hour ago my MDSs (primary and follower) started ping-pong
>>>> crashing with this message. I've spent about 30 minutes looking into
>>>> it but nothing yet.
>>>>
>>>> This is from a 0.94.3 MDS
>>>>
>>>
>>>>      0> 2015-10-11 17:01:23.596008 7fd4f52ad700 -1 mds/SessionMap.cc:
>>>> In function 'virtual void C_IO_SM_Save::finish(int)' thread
>>>> 7fd4f52ad700 time 2015-10-11 17:01:23.594089
>>>> mds/SessionMap.cc: 120: FAILED assert(r == 0)
>>>
>>> These "r == 0" asserts pretty much always mean that the MDS did did a
>>> read or write to RADOS (the OSDs) and got an error of some kind back.
>>> (Or in the case of the OSDs, access to the local filesystem returned
>>> an error, etc.) I don't think these writes include any safety checks
>>> which would let the MDS break it which means that probably the OSD is
>>> actually returning an error — odd, but not impossible.
>>>
>>> Notice that the assert happened in thread 7fd4f52ad700, and look for
>>> the stuff in that thread. You should be able to find an OSD op reply
>>> (on the SessionMap object) coming in and reporting an error code.
>>> -Greg
>>
>> I only two error ops in that whole MDS session. Neither one happened
>> on the same thread (7f5ab6000700 in this file). But it looks like the
>> only session map is the -90 "Message too long" one.
>>
>> mtanski@tiny:~$ cat single_crash.log | grep 'osd_op_reply' | grep -v
>> 'ondisk = 0'
>>  -3946> 2015-10-11 20:51:11.013965 7f5ab20f2700  1 --
>> 10.0.5.31:6802/27121 <== osd.25 10.0.5.57:6804/32341 6163 ====
>> osd_op_reply(46349 mds0_sessionmap [writefull 0~95168363] v0'0 uv0
>> ondisk = -90 ((90) Message too long)) v6 ==== 182+0+0 (2955408122 0 0)
>> 0x3a55d340 con 0x3d5a3c0
>>   -705> 2015-10-11 20:51:11.374132 7f5ab22f4700  1 --
>> 10.0.5.31:6802/27121 <== osd.28 10.0.5.50:6801/1787 5297 ====
>> osd_op_reply(48004 300.0000e274 [delete] v0'0 uv1349638 ondisk = -2
>> ((2) No such file or directory)) v6 ==== 179+0+0 (1182549251 0 0)
>> 0x66c5c80 con 0x3d5a7e0
>>
>> Any idea what this could be Greg?
>
> To follow this up I found this ticket from 9 months ago:
> http://tracker.ceph.com/issues/10449 In there Yan says:
>
> "it's a kernel bug. hang request prevents mds from trimming
> completed_requests in sessionmap. there is nothing to do with mds.
> (maybe we should add some code to MDS to show warning when this bug
> happens)"
>
> When I was debugging this I saw an OSD (not cephfs client) operation
> stuck for a long time along with the MDS error:
>
> HEALTH_WARN 1 requests are blocked > 32 sec; 1 osds have slow
> requests; mds cluster is degraded; mds0: Behind on trimming (709/30)
> 1 ops are blocked > 16777.2 sec
> 1 ops are blocked > 16777.2 sec on osd.28
>
> I did eventually bounce the OSD in question and it hasn't become stuck
> since, but the MDS is still eating it every time with the "Message too
> long" error on the session map.
>
> I'm not quite sure where to go from here.

This was the first time I had a chance to use the new recovery tools. I was
able to replay the journal, reset it and then reset the sessionmap. The MDS
returned to life and so far everything looks good. Yay.
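
Roughly the sequence from the disaster-recovery docs, from memory
(with the MDS stopped first; the export is the backup copy in case
things get worse, and the file name is just a placeholder):

  # keep a copy of the journal before touching anything
  $ cephfs-journal-tool journal export /root/mds0-journal.backup.bin
  # write whatever metadata can be scavenged from the journal back out
  $ cephfs-journal-tool event recover_dentries summary
  # then throw the journal away
  $ cephfs-journal-tool journal reset
  # and reset the oversized session table
  $ cephfs-table-tool all reset session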

The set of steps that triggers this bug/issue is pretty interesting.

>
>>
>>>
>>>>
>>>>  ceph version 0.94.3 (95cefea9fd9ab740263bf8bb4796fd864d9afe2b)
>>>>  1: (ceph::__ceph_assert_fail(char const*, char const*, int, char
>>>> const*)+0x8b) [0x94cc1b]
>>>>  2: /usr/bin/ceph-mds() [0x7c7df1]
>>>>  3: (MDSIOContextBase::complete(int)+0x81) [0x7c83b1]
>>>>  4: (Finisher::finisher_thread_entry()+0x1a0) [0x87f490]
>>>>  5: (()+0x8182) [0x7fd4fd031182]
>>>>  6: (clone()+0x6d) [0x7fd4fb7a047d]
>>>>  NOTE: a copy of the executable, or `objdump -rdS <executable>` is
>>>> needed to interpret this.
>>
>> --
>> Milosz Tanski
>> CTO
>> 16 East 34th Street, 15th floor
>> New York, NY 10016
>>
>> p: 646-253-9055
>> e: milosz@adfin.com
>
>
>
> --
> Milosz Tanski
> CTO
> 16 East 34th Street, 15th floor
> New York, NY 10016
>
> p: 646-253-9055
> e: milosz@adfin.com



-- 
Milosz Tanski
CTO
16 East 34th Street, 15th floor
New York, NY 10016

p: 646-253-9055
e: milosz@adfin.com


* Re: MDS stuck in a crash loop
From: Milosz Tanski @ 2015-10-11 22:44 UTC (permalink / raw)
  To: Gregory Farnum; +Cc: ceph-devel

On Sun, Oct 11, 2015 at 6:01 PM, Milosz Tanski <milosz@adfin.com> wrote:
> On Sun, Oct 11, 2015 at 5:33 PM, Milosz Tanski <milosz@adfin.com> wrote:
>> On Sun, Oct 11, 2015 at 5:24 PM, Milosz Tanski <milosz@adfin.com> wrote:
>>> On Sun, Oct 11, 2015 at 1:16 PM, Gregory Farnum <gfarnum@redhat.com> wrote:
>>>> On Sun, Oct 11, 2015 at 10:09 AM, Milosz Tanski <milosz@adfin.com> wrote:
>>>>> About an hour ago my MDSs (primary and follower) started ping-pong
>>>>> crashing with this message. I've spent about 30 minutes looking into
>>>>> it but nothing yet.
>>>>>
>>>>> This is from a 0.94.3 MDS
>>>>>
>>>>
>>>>>      0> 2015-10-11 17:01:23.596008 7fd4f52ad700 -1 mds/SessionMap.cc:
>>>>> In function 'virtual void C_IO_SM_Save::finish(int)' thread
>>>>> 7fd4f52ad700 time 2015-10-11 17:01:23.594089
>>>>> mds/SessionMap.cc: 120: FAILED assert(r == 0)
>>>>
>>>> These "r == 0" asserts pretty much always mean that the MDS did did a
>>>> read or write to RADOS (the OSDs) and got an error of some kind back.
>>>> (Or in the case of the OSDs, access to the local filesystem returned
>>>> an error, etc.) I don't think these writes include any safety checks
>>>> which would let the MDS break it which means that probably the OSD is
>>>> actually returning an error — odd, but not impossible.
>>>>
>>>> Notice that the assert happened in thread 7fd4f52ad700, and look for
>>>> the stuff in that thread. You should be able to find an OSD op reply
>>>> (on the SessionMap object) coming in and reporting an error code.
>>>> -Greg
>>>
>>> I only two error ops in that whole MDS session. Neither one happened
>>> on the same thread (7f5ab6000700 in this file). But it looks like the
>>> only session map is the -90 "Message too long" one.
>>>
>>> mtanski@tiny:~$ cat single_crash.log | grep 'osd_op_reply' | grep -v
>>> 'ondisk = 0'
>>>  -3946> 2015-10-11 20:51:11.013965 7f5ab20f2700  1 --
>>> 10.0.5.31:6802/27121 <== osd.25 10.0.5.57:6804/32341 6163 ====
>>> osd_op_reply(46349 mds0_sessionmap [writefull 0~95168363] v0'0 uv0
>>> ondisk = -90 ((90) Message too long)) v6 ==== 182+0+0 (2955408122 0 0)
>>> 0x3a55d340 con 0x3d5a3c0
>>>   -705> 2015-10-11 20:51:11.374132 7f5ab22f4700  1 --
>>> 10.0.5.31:6802/27121 <== osd.28 10.0.5.50:6801/1787 5297 ====
>>> osd_op_reply(48004 300.0000e274 [delete] v0'0 uv1349638 ondisk = -2
>>> ((2) No such file or directory)) v6 ==== 179+0+0 (1182549251 0 0)
>>> 0x66c5c80 con 0x3d5a7e0
>>>
>>> Any idea what this could be Greg?
>>
>> To follow this up I found this ticket from 9 months ago:
>> http://tracker.ceph.com/issues/10449 In there Yan says:
>>
>> "it's a kernel bug. hang request prevents mds from trimming
>> completed_requests in sessionmap. there is nothing to do with mds.
>> (maybe we should add some code to MDS to show warning when this bug
>> happens)"
>>
>> When I was debugging this I saw an OSD (not cephfs client) operation
>> stuck for a long time along with the MDS error:
>>
>> HEALTH_WARN 1 requests are blocked > 32 sec; 1 osds have slow
>> requests; mds cluster is degraded; mds0: Behind on trimming (709/30)
>> 1 ops are blocked > 16777.2 sec
>> 1 ops are blocked > 16777.2 sec on osd.28
>>
>> I did eventually bounce the OSD in question and it hasn't become stuck
>> since, but the MDS is still eating it every time with the "Message too
>> long" error on the session map.
>>
>> I'm not quite sure where to go from here.
>
> First time I had a chance to use the new recover tools. I was able to
> reply the journal, reset it and then reset the sessionmap. MDS
> returned back to life and so far everything looks good. Yay.
>
> Triggering this a bug/issue is a pretty interesting set of steps.

Spoke too soon: a missing dir is now causing the MDS to restart itself.

    -6> 2015-10-11 22:40:47.300169 7f580c7b9700  5 -- op tracker --
seq: 4, time: 2015-10-11 22:40:47.300168, event: finishing request,
op: client_request(client.3597476:21480382 rmdir #100015e0be2/58
2015-10-11 21:34:49.224905 RETRY=36)
    -5> 2015-10-11 22:40:47.300208 7f580c7b9700  5 -- op tracker --
seq: 4, time: 2015-10-11 22:40:47.300208, event: cleaned up request,
op: client_request(client.3597476:21480382 rmdir #100015e0be2/58
2015-10-11 21:34:49.224905 RETRY=36)
    -4> 2015-10-11 22:40:47.300231 7f580c7b9700  5 -- op tracker --
seq: 4, time: 2015-10-11 22:40:47.300231, event: done, op:
client_request(client.3597476:21480382 rmdir #100015e0be2/58
2015-10-11 21:34:49.224905 RETRY=36)
    -3> 2015-10-11 22:40:47.300284 7f580e0bd700  0
mds.0.cache.dir(100048df076) _fetched missing object for [dir
100048df076 /petabucket/beta/6d/f6/ [2,head] auth v=0 cv=0/0 ap=1+0+0
state=1073741952 f() n() hs=0+0,ss=0+0 | waiter=1 authpin=1 0x6debcf8]
    -2> 2015-10-11 22:40:47.300313 7f580e0bd700 -1
log_channel(cluster) log [ERR] : dir 100048df076 object missing on
disk; some files may be lost
    -1> 2015-10-11 22:40:47.300549 7f580c7b9700  5 -- op tracker --
seq: 6, time: 2015-10-11 22:40:47.300549, event: acquired locks, op:
client_request(client.3710941:1805 mkdir
#100048df076/1dd4c66b916cf55f19beafc0574ac2dd923bc7efcc5b7757198f323e04d0.dl
2015-10-11 22:36:32.489705 RETRY=33)
     0> 2015-10-11 22:40:47.303770 7f580c7b9700 -1 mds/CDir.cc: In
function 'fnode_t* CDir::project_fnode()' thread 7f580c7b9700 time
2015-10-11 22:40:47.300581
mds/CDir.cc: 1267: FAILED assert(get_version() != 0)

 ceph version 0.94.3 (95cefea9fd9ab740263bf8bb4796fd864d9afe2b)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char
const*)+0x8b) [0x94cc1b]
 2: (CDir::project_fnode()+0x2da) [0x75da1a]



Milosz Tanski
CTO
16 East 34th Street, 15th floor
New York, NY 10016

p: 646-253-9055
e: milosz@adfin.com


* Re: MDS stuck in a crash loop
From: Milosz Tanski @ 2015-10-12  2:36 UTC (permalink / raw)
  To: Gregory Farnum; +Cc: ceph-devel

On Sun, Oct 11, 2015 at 6:44 PM, Milosz Tanski <milosz@adfin.com> wrote:
> On Sun, Oct 11, 2015 at 6:01 PM, Milosz Tanski <milosz@adfin.com> wrote:
>> On Sun, Oct 11, 2015 at 5:33 PM, Milosz Tanski <milosz@adfin.com> wrote:
>>> On Sun, Oct 11, 2015 at 5:24 PM, Milosz Tanski <milosz@adfin.com> wrote:
>>>> On Sun, Oct 11, 2015 at 1:16 PM, Gregory Farnum <gfarnum@redhat.com> wrote:
>>>>> On Sun, Oct 11, 2015 at 10:09 AM, Milosz Tanski <milosz@adfin.com> wrote:
>>>>>> About an hour ago my MDSs (primary and follower) started ping-pong
>>>>>> crashing with this message. I've spent about 30 minutes looking into
>>>>>> it but nothing yet.
>>>>>>
>>>>>> This is from a 0.94.3 MDS
>>>>>>
>>>>>
>>>>>>      0> 2015-10-11 17:01:23.596008 7fd4f52ad700 -1 mds/SessionMap.cc:
>>>>>> In function 'virtual void C_IO_SM_Save::finish(int)' thread
>>>>>> 7fd4f52ad700 time 2015-10-11 17:01:23.594089
>>>>>> mds/SessionMap.cc: 120: FAILED assert(r == 0)
>>>>>
>>>>> These "r == 0" asserts pretty much always mean that the MDS did did a
>>>>> read or write to RADOS (the OSDs) and got an error of some kind back.
>>>>> (Or in the case of the OSDs, access to the local filesystem returned
>>>>> an error, etc.) I don't think these writes include any safety checks
>>>>> which would let the MDS break it which means that probably the OSD is
>>>>> actually returning an error — odd, but not impossible.
>>>>>
>>>>> Notice that the assert happened in thread 7fd4f52ad700, and look for
>>>>> the stuff in that thread. You should be able to find an OSD op reply
>>>>> (on the SessionMap object) coming in and reporting an error code.
>>>>> -Greg
>>>>
>>>> I only two error ops in that whole MDS session. Neither one happened
>>>> on the same thread (7f5ab6000700 in this file). But it looks like the
>>>> only session map is the -90 "Message too long" one.
>>>>
>>>> mtanski@tiny:~$ cat single_crash.log | grep 'osd_op_reply' | grep -v
>>>> 'ondisk = 0'
>>>>  -3946> 2015-10-11 20:51:11.013965 7f5ab20f2700  1 --
>>>> 10.0.5.31:6802/27121 <== osd.25 10.0.5.57:6804/32341 6163 ====
>>>> osd_op_reply(46349 mds0_sessionmap [writefull 0~95168363] v0'0 uv0
>>>> ondisk = -90 ((90) Message too long)) v6 ==== 182+0+0 (2955408122 0 0)
>>>> 0x3a55d340 con 0x3d5a3c0
>>>>   -705> 2015-10-11 20:51:11.374132 7f5ab22f4700  1 --
>>>> 10.0.5.31:6802/27121 <== osd.28 10.0.5.50:6801/1787 5297 ====
>>>> osd_op_reply(48004 300.0000e274 [delete] v0'0 uv1349638 ondisk = -2
>>>> ((2) No such file or directory)) v6 ==== 179+0+0 (1182549251 0 0)
>>>> 0x66c5c80 con 0x3d5a7e0
>>>>
>>>> Any idea what this could be Greg?
>>>
>>> To follow this up I found this ticket from 9 months ago:
>>> http://tracker.ceph.com/issues/10449 In there Yan says:
>>>
>>> "it's a kernel bug. hang request prevents mds from trimming
>>> completed_requests in sessionmap. there is nothing to do with mds.
>>> (maybe we should add some code to MDS to show warning when this bug
>>> happens)"
>>>
>>> When I was debugging this I saw an OSD (not cephfs client) operation
>>> stuck for a long time along with the MDS error:
>>>
>>> HEALTH_WARN 1 requests are blocked > 32 sec; 1 osds have slow
>>> requests; mds cluster is degraded; mds0: Behind on trimming (709/30)
>>> 1 ops are blocked > 16777.2 sec
>>> 1 ops are blocked > 16777.2 sec on osd.28
>>>
>>> I did eventually bounce the OSD in question and it hasn't become stuck
>>> since, but the MDS is still eating it every time with the "Message too
>>> long" error on the session map.
>>>
>>> I'm not quite sure where to go from here.
>>
>> First time I had a chance to use the new recover tools. I was able to
>> reply the journal, reset it and then reset the sessionmap. MDS
>> returned back to life and so far everything looks good. Yay.
>>
>> Triggering this a bug/issue is a pretty interesting set of steps.
>
> Spoke too soon, a missing dir is now causing MDS to restart it self.
>
>     -6> 2015-10-11 22:40:47.300169 7f580c7b9700  5 -- op tracker --
> seq: 4, time: 2015-10-11 22:40:47.300168, event: finishing request,
> op: client_request(client.3597476:21480382 rmdir #100015e0be2/58
> 2015-10-11 21:34:49.224905 RETRY=36)
>     -5> 2015-10-11 22:40:47.300208 7f580c7b9700  5 -- op tracker --
> seq: 4, time: 2015-10-11 22:40:47.300208, event: cleaned up request,
> op: client_request(client.3597476:21480382 rmdir #100015e0be2/58
> 2015-10-11 21:34:49.224905 RETRY=36)
>     -4> 2015-10-11 22:40:47.300231 7f580c7b9700  5 -- op tracker --
> seq: 4, time: 2015-10-11 22:40:47.300231, event: done, op:
> client_request(client.3597476:21480382 rmdir #100015e0be2/58
> 2015-10-11 21:34:49.224905 RETRY=36)
>     -3> 2015-10-11 22:40:47.300284 7f580e0bd700  0
> mds.0.cache.dir(100048df076) _fetched missing object for [dir
> 100048df076 /petabucket/beta/6d/f6/ [2,head] auth v=0 cv=0/0 ap=1+0+0
> state=1073741952 f() n() hs=0+0,ss=0+0 | waiter=1 authpin=1 0x6debcf8]
>     -2> 2015-10-11 22:40:47.300313 7f580e0bd700 -1
> log_channel(cluster) log [ERR] : dir 100048df076 object missing on
> disk; some files may be lost
>     -1> 2015-10-11 22:40:47.300549 7f580c7b9700  5 -- op tracker --
> seq: 6, time: 2015-10-11 22:40:47.300549, event: acquired locks, op:
> client_request(client.3710941:1805 mkdir
> #100048df076/1dd4c66b916cf55f19beafc0574ac2dd923bc7efcc5b7757198f323e04d0.dl
> 2015-10-11 22:36:32.489705 RETRY=33)
>      0> 2015-10-11 22:40:47.303770 7f580c7b9700 -1 mds/CDir.cc: In
> function 'fnode_t* CDir::project_fnode()' thread 7f580c7b9700 time
> 2015-10-11 22:40:47.300581
> mds/CDir.cc: 1267: FAILED assert(get_version() != 0)
>
>  ceph version 0.94.3 (95cefea9fd9ab740263bf8bb4796fd864d9afe2b)
>  1: (ceph::__ceph_assert_fail(char const*, char const*, int, char
> const*)+0x8b) [0x94cc1b]
>  2: (CDir::project_fnode()+0x2da) [0x75da1a]

Worked around this as well. I stopped the client that was trying to
access this dir (or delete it, not sure) and moved the dir somewhere
else (because I couldn't delete it).
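
(Literally just a rename from a client mount to get it out of the way;
the mount point below is made up, the directory is the one from the
log:)

  # park the broken directory somewhere nothing will walk into it
  $ mv /mnt/cephfs/petabucket/beta/6d/f6 /mnt/cephfs/quarantine-f6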

Here is the series of steps that happened today (not sure if they are
all related), in case the postmortem is interesting to anybody
following along at home.

1. One (of 5) monitors somehow gets OOM killed after eating up 10 gigs of RAM.
2. An OSD blocks on an IO request.
3. This is when the MDS starts growing its SessionMap to the point
where it eventually gets too big. This is related to bug
http://tracker.ceph.com/issues/10449 (not the client bug, but the
too-large SessionMap).
4. The MDS suicides and then the two MDS machines start ping-ponging
in recovery, but always fail to save the SessionMap.
5. I stop the MDS, recover the journal and reset the session map.
6. The MDSs come back to life after that.
7. Ceph loses one directory, and as soon as the client tries to access
it the MDS also suicides. Standing bug:
http://tracker.ceph.com/issues/9880
8. Lame workaround (moving/renaming the bad dir) so the client doesn't
access it.

-- 
Milosz Tanski
CTO
16 East 34th Street, 15th floor
New York, NY 10016

p: 646-253-9055
e: milosz@adfin.com


* Re: MDS stuck in a crash loop
From: Gregory Farnum @ 2015-10-14  4:46 UTC (permalink / raw)
  To: Milosz Tanski; +Cc: ceph-devel

On Sun, Oct 11, 2015 at 7:36 PM, Milosz Tanski <milosz@adfin.com> wrote:
> On Sun, Oct 11, 2015 at 6:44 PM, Milosz Tanski <milosz@adfin.com> wrote:
>> On Sun, Oct 11, 2015 at 6:01 PM, Milosz Tanski <milosz@adfin.com> wrote:
>>> On Sun, Oct 11, 2015 at 5:33 PM, Milosz Tanski <milosz@adfin.com> wrote:
>>>> On Sun, Oct 11, 2015 at 5:24 PM, Milosz Tanski <milosz@adfin.com> wrote:
>>>>> On Sun, Oct 11, 2015 at 1:16 PM, Gregory Farnum <gfarnum@redhat.com> wrote:
>>>>>> On Sun, Oct 11, 2015 at 10:09 AM, Milosz Tanski <milosz@adfin.com> wrote:
>>>>>>> About an hour ago my MDSs (primary and follower) started ping-pong
>>>>>>> crashing with this message. I've spent about 30 minutes looking into
>>>>>>> it but nothing yet.
>>>>>>>
>>>>>>> This is from a 0.94.3 MDS
>>>>>>>
>>>>>>
>>>>>>>      0> 2015-10-11 17:01:23.596008 7fd4f52ad700 -1 mds/SessionMap.cc:
>>>>>>> In function 'virtual void C_IO_SM_Save::finish(int)' thread
>>>>>>> 7fd4f52ad700 time 2015-10-11 17:01:23.594089
>>>>>>> mds/SessionMap.cc: 120: FAILED assert(r == 0)
>>>>>>
>>>>>> These "r == 0" asserts pretty much always mean that the MDS did did a
>>>>>> read or write to RADOS (the OSDs) and got an error of some kind back.
>>>>>> (Or in the case of the OSDs, access to the local filesystem returned
>>>>>> an error, etc.) I don't think these writes include any safety checks
>>>>>> which would let the MDS break it which means that probably the OSD is
>>>>>> actually returning an error — odd, but not impossible.
>>>>>>
>>>>>> Notice that the assert happened in thread 7fd4f52ad700, and look for
>>>>>> the stuff in that thread. You should be able to find an OSD op reply
>>>>>> (on the SessionMap object) coming in and reporting an error code.
>>>>>> -Greg
>>>>>
>>>>> I only two error ops in that whole MDS session. Neither one happened
>>>>> on the same thread (7f5ab6000700 in this file). But it looks like the
>>>>> only session map is the -90 "Message too long" one.
>>>>>
>>>>> mtanski@tiny:~$ cat single_crash.log | grep 'osd_op_reply' | grep -v
>>>>> 'ondisk = 0'
>>>>>  -3946> 2015-10-11 20:51:11.013965 7f5ab20f2700  1 --
>>>>> 10.0.5.31:6802/27121 <== osd.25 10.0.5.57:6804/32341 6163 ====
>>>>> osd_op_reply(46349 mds0_sessionmap [writefull 0~95168363] v0'0 uv0
>>>>> ondisk = -90 ((90) Message too long)) v6 ==== 182+0+0 (2955408122 0 0)
>>>>> 0x3a55d340 con 0x3d5a3c0
>>>>>   -705> 2015-10-11 20:51:11.374132 7f5ab22f4700  1 --
>>>>> 10.0.5.31:6802/27121 <== osd.28 10.0.5.50:6801/1787 5297 ====
>>>>> osd_op_reply(48004 300.0000e274 [delete] v0'0 uv1349638 ondisk = -2
>>>>> ((2) No such file or directory)) v6 ==== 179+0+0 (1182549251 0 0)
>>>>> 0x66c5c80 con 0x3d5a7e0
>>>>>
>>>>> Any idea what this could be Greg?
>>>>
>>>> To follow this up I found this ticket from 9 months ago:
>>>> http://tracker.ceph.com/issues/10449 In there Yan says:
>>>>
>>>> "it's a kernel bug. hang request prevents mds from trimming
>>>> completed_requests in sessionmap. there is nothing to do with mds.
>>>> (maybe we should add some code to MDS to show warning when this bug
>>>> happens)"
>>>>
>>>> When I was debugging this I saw an OSD (not cephfs client) operation
>>>> stuck for a long time along with the MDS error:
>>>>
>>>> HEALTH_WARN 1 requests are blocked > 32 sec; 1 osds have slow
>>>> requests; mds cluster is degraded; mds0: Behind on trimming (709/30)
>>>> 1 ops are blocked > 16777.2 sec
>>>> 1 ops are blocked > 16777.2 sec on osd.28
>>>>
>>>> I did eventually bounce the OSD in question and it hasn't become stuck
>>>> since, but the MDS is still eating it every time with the "Message too
>>>> long" error on the session map.
>>>>
>>>> I'm not quite sure where to go from here.
>>>
>>> First time I had a chance to use the new recover tools. I was able to
>>> reply the journal, reset it and then reset the sessionmap. MDS
>>> returned back to life and so far everything looks good. Yay.
>>>
>>> Triggering this a bug/issue is a pretty interesting set of steps.
>>
>> Spoke too soon, a missing dir is now causing MDS to restart it self.
>>
>>     -6> 2015-10-11 22:40:47.300169 7f580c7b9700  5 -- op tracker --
>> seq: 4, time: 2015-10-11 22:40:47.300168, event: finishing request,
>> op: client_request(client.3597476:21480382 rmdir #100015e0be2/58
>> 2015-10-11 21:34:49.224905 RETRY=36)
>>     -5> 2015-10-11 22:40:47.300208 7f580c7b9700  5 -- op tracker --
>> seq: 4, time: 2015-10-11 22:40:47.300208, event: cleaned up request,
>> op: client_request(client.3597476:21480382 rmdir #100015e0be2/58
>> 2015-10-11 21:34:49.224905 RETRY=36)
>>     -4> 2015-10-11 22:40:47.300231 7f580c7b9700  5 -- op tracker --
>> seq: 4, time: 2015-10-11 22:40:47.300231, event: done, op:
>> client_request(client.3597476:21480382 rmdir #100015e0be2/58
>> 2015-10-11 21:34:49.224905 RETRY=36)
>>     -3> 2015-10-11 22:40:47.300284 7f580e0bd700  0
>> mds.0.cache.dir(100048df076) _fetched missing object for [dir
>> 100048df076 /petabucket/beta/6d/f6/ [2,head] auth v=0 cv=0/0 ap=1+0+0
>> state=1073741952 f() n() hs=0+0,ss=0+0 | waiter=1 authpin=1 0x6debcf8]
>>     -2> 2015-10-11 22:40:47.300313 7f580e0bd700 -1
>> log_channel(cluster) log [ERR] : dir 100048df076 object missing on
>> disk; some files may be lost
>>     -1> 2015-10-11 22:40:47.300549 7f580c7b9700  5 -- op tracker --
>> seq: 6, time: 2015-10-11 22:40:47.300549, event: acquired locks, op:
>> client_request(client.3710941:1805 mkdir
>> #100048df076/1dd4c66b916cf55f19beafc0574ac2dd923bc7efcc5b7757198f323e04d0.dl
>> 2015-10-11 22:36:32.489705 RETRY=33)
>>      0> 2015-10-11 22:40:47.303770 7f580c7b9700 -1 mds/CDir.cc: In
>> function 'fnode_t* CDir::project_fnode()' thread 7f580c7b9700 time
>> 2015-10-11 22:40:47.300581
>> mds/CDir.cc: 1267: FAILED assert(get_version() != 0)
>>
>>  ceph version 0.94.3 (95cefea9fd9ab740263bf8bb4796fd864d9afe2b)
>>  1: (ceph::__ceph_assert_fail(char const*, char const*, int, char
>> const*)+0x8b) [0x94cc1b]
>>  2: (CDir::project_fnode()+0x2da) [0x75da1a]
>
> Worked around this as well. Stopped the client that was trying to
> access this dir (or delete it, not sure) and move it somewhere else
> (because I can't delete it).
>
> A series of steps that happened today (not sure if all are related) in
> case the postmortem is interesting to anybody following at home.
>
> 1. One (of 5) monitors somehow gets OOM killed after eating up 10gigs of ram.
> 2. An OSD blocks on an IO request
> 3. This is when the MSD starts growing it's SessionMap to a place
> where eventually it gets to big. This is related to bug:
> http://tracker.ceph.com/issues/10449 (not the client bug, but the too
> large Session)
> 4. MDS suicides and then two MDS machines start ping ponging in
> recovery but always fail to save SessionMap.
> 5. I stop the MDS, recover the journal and reset the session map.

Exactly what did you do in this step? I have a suspicion you didn't
recover and flush out the journal all the way, and that's what caused
your directory to go bad.
-Greg

> 6. MDS come back to life after that.
> 7. Ceph loses one directory and as soon as the client tries to access
> it the MDS also suicides. Standing bug:
> http://tracker.ceph.com/issues/9880
> 8. Lame workaround (moving / renaming bad dir) so client doesn't access it.
>
> --
> Milosz Tanski
> CTO
> 16 East 34th Street, 15th floor
> New York, NY 10016
>
> p: 646-253-9055
> e: milosz@adfin.com


* Re: MDS stuck in a crash loop
From: John Spray @ 2015-10-14 13:21 UTC (permalink / raw)
  To: Milosz Tanski; +Cc: Gregory Farnum, ceph-devel

On Mon, Oct 12, 2015 at 3:36 AM, Milosz Tanski <milosz@adfin.com> wrote:
> On Sun, Oct 11, 2015 at 6:44 PM, Milosz Tanski <milosz@adfin.com> wrote:
>> On Sun, Oct 11, 2015 at 6:01 PM, Milosz Tanski <milosz@adfin.com> wrote:
>>> On Sun, Oct 11, 2015 at 5:33 PM, Milosz Tanski <milosz@adfin.com> wrote:
>>>> On Sun, Oct 11, 2015 at 5:24 PM, Milosz Tanski <milosz@adfin.com> wrote:
>>>>> On Sun, Oct 11, 2015 at 1:16 PM, Gregory Farnum <gfarnum@redhat.com> wrote:
>>>>>> On Sun, Oct 11, 2015 at 10:09 AM, Milosz Tanski <milosz@adfin.com> wrote:
>>>>>>> About an hour ago my MDSs (primary and follower) started ping-pong
>>>>>>> crashing with this message. I've spent about 30 minutes looking into
>>>>>>> it but nothing yet.
>>>>>>>
>>>>>>> This is from a 0.94.3 MDS
>>>>>>>
>>>>>>
>>>>>>>      0> 2015-10-11 17:01:23.596008 7fd4f52ad700 -1 mds/SessionMap.cc:
>>>>>>> In function 'virtual void C_IO_SM_Save::finish(int)' thread
>>>>>>> 7fd4f52ad700 time 2015-10-11 17:01:23.594089
>>>>>>> mds/SessionMap.cc: 120: FAILED assert(r == 0)
>>>>>>
>>>>>> These "r == 0" asserts pretty much always mean that the MDS did did a
>>>>>> read or write to RADOS (the OSDs) and got an error of some kind back.
>>>>>> (Or in the case of the OSDs, access to the local filesystem returned
>>>>>> an error, etc.) I don't think these writes include any safety checks
>>>>>> which would let the MDS break it which means that probably the OSD is
>>>>>> actually returning an error — odd, but not impossible.
>>>>>>
>>>>>> Notice that the assert happened in thread 7fd4f52ad700, and look for
>>>>>> the stuff in that thread. You should be able to find an OSD op reply
>>>>>> (on the SessionMap object) coming in and reporting an error code.
>>>>>> -Greg
>>>>>
>>>>> I only two error ops in that whole MDS session. Neither one happened
>>>>> on the same thread (7f5ab6000700 in this file). But it looks like the
>>>>> only session map is the -90 "Message too long" one.
>>>>>
>>>>> mtanski@tiny:~$ cat single_crash.log | grep 'osd_op_reply' | grep -v
>>>>> 'ondisk = 0'
>>>>>  -3946> 2015-10-11 20:51:11.013965 7f5ab20f2700  1 --
>>>>> 10.0.5.31:6802/27121 <== osd.25 10.0.5.57:6804/32341 6163 ====
>>>>> osd_op_reply(46349 mds0_sessionmap [writefull 0~95168363] v0'0 uv0
>>>>> ondisk = -90 ((90) Message too long)) v6 ==== 182+0+0 (2955408122 0 0)
>>>>> 0x3a55d340 con 0x3d5a3c0
>>>>>   -705> 2015-10-11 20:51:11.374132 7f5ab22f4700  1 --
>>>>> 10.0.5.31:6802/27121 <== osd.28 10.0.5.50:6801/1787 5297 ====
>>>>> osd_op_reply(48004 300.0000e274 [delete] v0'0 uv1349638 ondisk = -2
>>>>> ((2) No such file or directory)) v6 ==== 179+0+0 (1182549251 0 0)
>>>>> 0x66c5c80 con 0x3d5a7e0
>>>>>
>>>>> Any idea what this could be Greg?
>>>>
>>>> To follow this up I found this ticket from 9 months ago:
>>>> http://tracker.ceph.com/issues/10449 In there Yan says:
>>>>
>>>> "it's a kernel bug. hang request prevents mds from trimming
>>>> completed_requests in sessionmap. there is nothing to do with mds.
>>>> (maybe we should add some code to MDS to show warning when this bug
>>>> happens)"
>>>>
>>>> When I was debugging this I saw an OSD (not cephfs client) operation
>>>> stuck for a long time along with the MDS error:
>>>>
>>>> HEALTH_WARN 1 requests are blocked > 32 sec; 1 osds have slow
>>>> requests; mds cluster is degraded; mds0: Behind on trimming (709/30)
>>>> 1 ops are blocked > 16777.2 sec
>>>> 1 ops are blocked > 16777.2 sec on osd.28
>>>>
>>>> I did eventually bounce the OSD in question and it hasn't become stuck
>>>> since, but the MDS is still eating it every time with the "Message too
>>>> long" error on the session map.
>>>>
>>>> I'm not quite sure where to go from here.
>>>
>>> First time I had a chance to use the new recover tools. I was able to
>>> reply the journal, reset it and then reset the sessionmap. MDS
>>> returned back to life and so far everything looks good. Yay.
>>>
>>> Triggering this a bug/issue is a pretty interesting set of steps.
>>
>> Spoke too soon, a missing dir is now causing MDS to restart it self.
>>
>>     -6> 2015-10-11 22:40:47.300169 7f580c7b9700  5 -- op tracker --
>> seq: 4, time: 2015-10-11 22:40:47.300168, event: finishing request,
>> op: client_request(client.3597476:21480382 rmdir #100015e0be2/58
>> 2015-10-11 21:34:49.224905 RETRY=36)
>>     -5> 2015-10-11 22:40:47.300208 7f580c7b9700  5 -- op tracker --
>> seq: 4, time: 2015-10-11 22:40:47.300208, event: cleaned up request,
>> op: client_request(client.3597476:21480382 rmdir #100015e0be2/58
>> 2015-10-11 21:34:49.224905 RETRY=36)
>>     -4> 2015-10-11 22:40:47.300231 7f580c7b9700  5 -- op tracker --
>> seq: 4, time: 2015-10-11 22:40:47.300231, event: done, op:
>> client_request(client.3597476:21480382 rmdir #100015e0be2/58
>> 2015-10-11 21:34:49.224905 RETRY=36)
>>     -3> 2015-10-11 22:40:47.300284 7f580e0bd700  0
>> mds.0.cache.dir(100048df076) _fetched missing object for [dir
>> 100048df076 /petabucket/beta/6d/f6/ [2,head] auth v=0 cv=0/0 ap=1+0+0
>> state=1073741952 f() n() hs=0+0,ss=0+0 | waiter=1 authpin=1 0x6debcf8]
>>     -2> 2015-10-11 22:40:47.300313 7f580e0bd700 -1
>> log_channel(cluster) log [ERR] : dir 100048df076 object missing on
>> disk; some files may be lost
>>     -1> 2015-10-11 22:40:47.300549 7f580c7b9700  5 -- op tracker --
>> seq: 6, time: 2015-10-11 22:40:47.300549, event: acquired locks, op:
>> client_request(client.3710941:1805 mkdir
>> #100048df076/1dd4c66b916cf55f19beafc0574ac2dd923bc7efcc5b7757198f323e04d0.dl
>> 2015-10-11 22:36:32.489705 RETRY=33)
>>      0> 2015-10-11 22:40:47.303770 7f580c7b9700 -1 mds/CDir.cc: In
>> function 'fnode_t* CDir::project_fnode()' thread 7f580c7b9700 time
>> 2015-10-11 22:40:47.300581
>> mds/CDir.cc: 1267: FAILED assert(get_version() != 0)

Hmm, so something is creating a dentry that points to a dirfrag object
that doesn't really exist.  What was the series of client operations
that led to the directory (the one with inode number 0x100048df076)
being where it is now?  Was it created there, or moved there from
somewhere?

The way this is meant to work is that the journal we scavenge from
contains EMetaBlobs that include fnodes, and when we scavenge the
fnode, we write it into the dirfrag header, creating the dirfrag if it
doesn't exist.  The same mkdir that creates the dentry should also
contain the update to the fnode.

It would be really useful to get a copy of the journal at the point
right before you run "recover dentries".
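
(That is, a dump taken before "event recover_dentries" or "journal
reset" are run; something like the following, with whatever output
path suits you:)

  # binary dump of the pre-recovery MDS journal
  $ cephfs-journal-tool journal export /tmp/mds0-journal-before-recovery.bin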

Separately, I've been extending our journal repair test to see if I
could get a similar result. I haven't found a path that results in a
dentry-but-no-dirfrag case, but I can at least get the particular
assertion you're seeing, so I'll fix that in master.

John




>>
>>  ceph version 0.94.3 (95cefea9fd9ab740263bf8bb4796fd864d9afe2b)
>>  1: (ceph::__ceph_assert_fail(char const*, char const*, int, char
>> const*)+0x8b) [0x94cc1b]
>>  2: (CDir::project_fnode()+0x2da) [0x75da1a]
>
> Worked around this as well. Stopped the client that was trying to
> access this dir (or delete it, not sure) and move it somewhere else
> (because I can't delete it).
>
> A series of steps that happened today (not sure if all are related) in
> case the postmortem is interesting to anybody following at home.
>
> 1. One (of 5) monitors somehow gets OOM killed after eating up 10gigs of ram.
> 2. An OSD blocks on an IO request
> 3. This is when the MSD starts growing it's SessionMap to a place
> where eventually it gets to big. This is related to bug:
> http://tracker.ceph.com/issues/10449 (not the client bug, but the too
> large Session)
> 4. MDS suicides and then two MDS machines start ping ponging in
> recovery but always fail to save SessionMap.
> 5. I stop the MDS, recover the journal and reset the session map.
> 6. MDS come back to life after that.
> 7. Ceph loses one directory and as soon as the client tries to access
> it the MDS also suicides. Standing bug:
> http://tracker.ceph.com/issues/9880
> 8. Lame workaround (moving / renaming bad dir) so client doesn't access it.
>
> --
> Milosz Tanski
> CTO
> 16 East 34th Street, 15th floor
> New York, NY 10016
>
> p: 646-253-9055
> e: milosz@adfin.com


* Re: MDS stuck in a crash loop
From: Milosz Tanski @ 2015-10-19 15:28 UTC (permalink / raw)
  To: John Spray; +Cc: Gregory Farnum, ceph-devel

On Wed, Oct 14, 2015 at 9:21 AM, John Spray <jspray@redhat.com> wrote:
> On Mon, Oct 12, 2015 at 3:36 AM, Milosz Tanski <milosz@adfin.com> wrote:
>> On Sun, Oct 11, 2015 at 6:44 PM, Milosz Tanski <milosz@adfin.com> wrote:
>>> On Sun, Oct 11, 2015 at 6:01 PM, Milosz Tanski <milosz@adfin.com> wrote:
>>>> On Sun, Oct 11, 2015 at 5:33 PM, Milosz Tanski <milosz@adfin.com> wrote:
>>>>> On Sun, Oct 11, 2015 at 5:24 PM, Milosz Tanski <milosz@adfin.com> wrote:
>>>>>> On Sun, Oct 11, 2015 at 1:16 PM, Gregory Farnum <gfarnum@redhat.com> wrote:
>>>>>>> On Sun, Oct 11, 2015 at 10:09 AM, Milosz Tanski <milosz@adfin.com> wrote:
>>>>>>>> About an hour ago my MDSs (primary and follower) started ping-pong
>>>>>>>> crashing with this message. I've spent about 30 minutes looking into
>>>>>>>> it but nothing yet.
>>>>>>>>
>>>>>>>> This is from a 0.94.3 MDS
>>>>>>>>
>>>>>>>
>>>>>>>>      0> 2015-10-11 17:01:23.596008 7fd4f52ad700 -1 mds/SessionMap.cc:
>>>>>>>> In function 'virtual void C_IO_SM_Save::finish(int)' thread
>>>>>>>> 7fd4f52ad700 time 2015-10-11 17:01:23.594089
>>>>>>>> mds/SessionMap.cc: 120: FAILED assert(r == 0)
>>>>>>>
>>>>>>> These "r == 0" asserts pretty much always mean that the MDS did did a
>>>>>>> read or write to RADOS (the OSDs) and got an error of some kind back.
>>>>>>> (Or in the case of the OSDs, access to the local filesystem returned
>>>>>>> an error, etc.) I don't think these writes include any safety checks
>>>>>>> which would let the MDS break it which means that probably the OSD is
>>>>>>> actually returning an error — odd, but not impossible.
>>>>>>>
>>>>>>> Notice that the assert happened in thread 7fd4f52ad700, and look for
>>>>>>> the stuff in that thread. You should be able to find an OSD op reply
>>>>>>> (on the SessionMap object) coming in and reporting an error code.
>>>>>>> -Greg
>>>>>>
>>>>>> I only two error ops in that whole MDS session. Neither one happened
>>>>>> on the same thread (7f5ab6000700 in this file). But it looks like the
>>>>>> only session map is the -90 "Message too long" one.
>>>>>>
>>>>>> mtanski@tiny:~$ cat single_crash.log | grep 'osd_op_reply' | grep -v
>>>>>> 'ondisk = 0'
>>>>>>  -3946> 2015-10-11 20:51:11.013965 7f5ab20f2700  1 --
>>>>>> 10.0.5.31:6802/27121 <== osd.25 10.0.5.57:6804/32341 6163 ====
>>>>>> osd_op_reply(46349 mds0_sessionmap [writefull 0~95168363] v0'0 uv0
>>>>>> ondisk = -90 ((90) Message too long)) v6 ==== 182+0+0 (2955408122 0 0)
>>>>>> 0x3a55d340 con 0x3d5a3c0
>>>>>>   -705> 2015-10-11 20:51:11.374132 7f5ab22f4700  1 --
>>>>>> 10.0.5.31:6802/27121 <== osd.28 10.0.5.50:6801/1787 5297 ====
>>>>>> osd_op_reply(48004 300.0000e274 [delete] v0'0 uv1349638 ondisk = -2
>>>>>> ((2) No such file or directory)) v6 ==== 179+0+0 (1182549251 0 0)
>>>>>> 0x66c5c80 con 0x3d5a7e0
>>>>>>
>>>>>> Any idea what this could be Greg?
>>>>>
>>>>> To follow this up I found this ticket from 9 months ago:
>>>>> http://tracker.ceph.com/issues/10449 In there Yan says:
>>>>>
>>>>> "it's a kernel bug. hang request prevents mds from trimming
>>>>> completed_requests in sessionmap. there is nothing to do with mds.
>>>>> (maybe we should add some code to MDS to show warning when this bug
>>>>> happens)"
>>>>>
>>>>> When I was debugging this I saw an OSD (not cephfs client) operation
>>>>> stuck for a long time along with the MDS error:
>>>>>
>>>>> HEALTH_WARN 1 requests are blocked > 32 sec; 1 osds have slow
>>>>> requests; mds cluster is degraded; mds0: Behind on trimming (709/30)
>>>>> 1 ops are blocked > 16777.2 sec
>>>>> 1 ops are blocked > 16777.2 sec on osd.28
>>>>>
>>>>> I did eventually bounce the OSD in question and it hasn't become stuck
>>>>> since, but the MDS is still eating it every time with the "Message too
>>>>> long" error on the session map.
>>>>>
>>>>> I'm not quite sure where to go from here.
>>>>
>>>> First time I had a chance to use the new recover tools. I was able to
>>>> reply the journal, reset it and then reset the sessionmap. MDS
>>>> returned back to life and so far everything looks good. Yay.
>>>>
>>>> Triggering this a bug/issue is a pretty interesting set of steps.
>>>
>>> Spoke too soon, a missing dir is now causing MDS to restart it self.
>>>
>>>     -6> 2015-10-11 22:40:47.300169 7f580c7b9700  5 -- op tracker --
>>> seq: 4, time: 2015-10-11 22:40:47.300168, event: finishing request,
>>> op: client_request(client.3597476:21480382 rmdir #100015e0be2/58
>>> 2015-10-11 21:34:49.224905 RETRY=36)
>>>     -5> 2015-10-11 22:40:47.300208 7f580c7b9700  5 -- op tracker --
>>> seq: 4, time: 2015-10-11 22:40:47.300208, event: cleaned up request,
>>> op: client_request(client.3597476:21480382 rmdir #100015e0be2/58
>>> 2015-10-11 21:34:49.224905 RETRY=36)
>>>     -4> 2015-10-11 22:40:47.300231 7f580c7b9700  5 -- op tracker --
>>> seq: 4, time: 2015-10-11 22:40:47.300231, event: done, op:
>>> client_request(client.3597476:21480382 rmdir #100015e0be2/58
>>> 2015-10-11 21:34:49.224905 RETRY=36)
>>>     -3> 2015-10-11 22:40:47.300284 7f580e0bd700  0
>>> mds.0.cache.dir(100048df076) _fetched missing object for [dir
>>> 100048df076 /petabucket/beta/6d/f6/ [2,head] auth v=0 cv=0/0 ap=1+0+0
>>> state=1073741952 f() n() hs=0+0,ss=0+0 | waiter=1 authpin=1 0x6debcf8]
>>>     -2> 2015-10-11 22:40:47.300313 7f580e0bd700 -1
>>> log_channel(cluster) log [ERR] : dir 100048df076 object missing on
>>> disk; some files may be lost
>>>     -1> 2015-10-11 22:40:47.300549 7f580c7b9700  5 -- op tracker --
>>> seq: 6, time: 2015-10-11 22:40:47.300549, event: acquired locks, op:
>>> client_request(client.3710941:1805 mkdir
>>> #100048df076/1dd4c66b916cf55f19beafc0574ac2dd923bc7efcc5b7757198f323e04d0.dl
>>> 2015-10-11 22:36:32.489705 RETRY=33)
>>>      0> 2015-10-11 22:40:47.303770 7f580c7b9700 -1 mds/CDir.cc: In
>>> function 'fnode_t* CDir::project_fnode()' thread 7f580c7b9700 time
>>> 2015-10-11 22:40:47.300581
>>> mds/CDir.cc: 1267: FAILED assert(get_version() != 0)
>
> Hmm, so something is creating a dentry that points to a dirfrag object
> that doesn't really exist.  What was the series of client operations
> that led to the directory (the one with inode number 0x100048df076)
> being where it is now?  Was it created there, or moved there from
> somewhere?
>
> The way this is meant to work is that the journal we scavenge from
> contains EMetaBlobs that include fnodes, and when we scavenge the
> fnode, we write it into the dirfrag header, creating the dirfrag if it
> doesn't exist.  The same mkdir that creates the dentry should also
> contain the update to the fnode.
>
> It would be really useful to get a copy of the journal at the point
> right before you run "recover dentries".
>
> Separately, I've been extending our journal repair test to see if I
> could get a similar result. I haven't found a path that results in a
> dentry-but-no-dirfrag case, but I can at least get the particular
> assertion you're seeing, so I'll fix that in master.

Sadly I don't have the original cluster data. We re-created a new
filesystem shortly afterwards, since we kept running into more of these
dirfrag issues after the initial one. I was just trying to get things
back up and running quickly.

>
> John
>
>
>
>
>>>
>>>  ceph version 0.94.3 (95cefea9fd9ab740263bf8bb4796fd864d9afe2b)
>>>  1: (ceph::__ceph_assert_fail(char const*, char const*, int, char
>>> const*)+0x8b) [0x94cc1b]
>>>  2: (CDir::project_fnode()+0x2da) [0x75da1a]
>>
>> Worked around this as well. Stopped the client that was trying to
>> access this dir (or delete it, not sure) and moved the dir somewhere
>> else (because I couldn't delete it).
>>
>> Here's the series of steps that happened today (not sure if all are
>> related), in case the postmortem is interesting to anybody following
>> along at home.
>>
>> 1. One (of 5) monitors somehow gets OOM killed after eating up 10 GB of RAM.
>> 2. An OSD blocks on an IO request.
>> 3. This is when the MDS starts growing its SessionMap until it
>> eventually gets too big. This is related to
>> http://tracker.ceph.com/issues/10449 (not the client bug, but the
>> too-large SessionMap).
>> 4. The MDS suicides, and then the two MDS machines start ping-ponging
>> in recovery but always fail to save the SessionMap.
>> 5. I stop the MDS, recover the journal and reset the session map.
>> 6. The MDS comes back to life after that.
>> 7. Ceph loses one directory, and as soon as the client tries to access
>> it the MDS also suicides. Standing bug: http://tracker.ceph.com/issues/9880
>> 8. Lame workaround (moving/renaming the bad dir) so the client doesn't
>> access it.
>>
>> --
>> Milosz Tanski
>> CTO
>> 16 East 34th Street, 15th floor
>> New York, NY 10016
>>
>> p: 646-253-9055
>> e: milosz@adfin.com
>> --
>> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
>> the body of a message to majordomo@vger.kernel.org
>> More majordomo info at  http://vger.kernel.org/majordomo-info.html



-- 
Milosz Tanski
CTO
16 East 34th Street, 15th floor
New York, NY 10016

p: 646-253-9055
e: milosz@adfin.com
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: MDS stuck in a crash loop
  2015-10-14  4:46             ` Gregory Farnum
@ 2015-10-19 15:31               ` Milosz Tanski
  2015-10-21 18:29                 ` Gregory Farnum
  0 siblings, 1 reply; 19+ messages in thread
From: Milosz Tanski @ 2015-10-19 15:31 UTC (permalink / raw)
  To: Gregory Farnum; +Cc: ceph-devel

On Wed, Oct 14, 2015 at 12:46 AM, Gregory Farnum <gfarnum@redhat.com> wrote:
> On Sun, Oct 11, 2015 at 7:36 PM, Milosz Tanski <milosz@adfin.com> wrote:
>> On Sun, Oct 11, 2015 at 6:44 PM, Milosz Tanski <milosz@adfin.com> wrote:
>>> On Sun, Oct 11, 2015 at 6:01 PM, Milosz Tanski <milosz@adfin.com> wrote:
>>>> On Sun, Oct 11, 2015 at 5:33 PM, Milosz Tanski <milosz@adfin.com> wrote:
>>>>> On Sun, Oct 11, 2015 at 5:24 PM, Milosz Tanski <milosz@adfin.com> wrote:
>>>>>> On Sun, Oct 11, 2015 at 1:16 PM, Gregory Farnum <gfarnum@redhat.com> wrote:
>>>>>>> On Sun, Oct 11, 2015 at 10:09 AM, Milosz Tanski <milosz@adfin.com> wrote:
>>>>>>>> About an hour ago my MDSs (primary and follower) started ping-pong
>>>>>>>> crashing with this message. I've spent about 30 minutes looking into
>>>>>>>> it but nothing yet.
>>>>>>>>
>>>>>>>> This is from a 0.94.3 MDS
>>>>>>>>
>>>>>>>
>>>>>>>>      0> 2015-10-11 17:01:23.596008 7fd4f52ad700 -1 mds/SessionMap.cc:
>>>>>>>> In function 'virtual void C_IO_SM_Save::finish(int)' thread
>>>>>>>> 7fd4f52ad700 time 2015-10-11 17:01:23.594089
>>>>>>>> mds/SessionMap.cc: 120: FAILED assert(r == 0)
>>>>>>>
>>>>>>> These "r == 0" asserts pretty much always mean that the MDS did did a
>>>>>>> read or write to RADOS (the OSDs) and got an error of some kind back.
>>>>>>> (Or in the case of the OSDs, access to the local filesystem returned
>>>>>>> an error, etc.) I don't think these writes include any safety checks
>>>>>>> which would let the MDS break it which means that probably the OSD is
>>>>>>> actually returning an error — odd, but not impossible.
>>>>>>>
>>>>>>> Notice that the assert happened in thread 7fd4f52ad700, and look for
>>>>>>> the stuff in that thread. You should be able to find an OSD op reply
>>>>>>> (on the SessionMap object) coming in and reporting an error code.
>>>>>>> -Greg
>>>>>>
>>>>>> I only see two error ops in that whole MDS session. Neither one happened
>>>>>> on the same thread (7f5ab6000700 in this file). But it looks like the
>>>>>> only sessionmap error is the -90 "Message too long" one.
>>>>>>
>>>>>> mtanski@tiny:~$ cat single_crash.log | grep 'osd_op_reply' | grep -v
>>>>>> 'ondisk = 0'
>>>>>>  -3946> 2015-10-11 20:51:11.013965 7f5ab20f2700  1 --
>>>>>> 10.0.5.31:6802/27121 <== osd.25 10.0.5.57:6804/32341 6163 ====
>>>>>> osd_op_reply(46349 mds0_sessionmap [writefull 0~95168363] v0'0 uv0
>>>>>> ondisk = -90 ((90) Message too long)) v6 ==== 182+0+0 (2955408122 0 0)
>>>>>> 0x3a55d340 con 0x3d5a3c0
>>>>>>   -705> 2015-10-11 20:51:11.374132 7f5ab22f4700  1 --
>>>>>> 10.0.5.31:6802/27121 <== osd.28 10.0.5.50:6801/1787 5297 ====
>>>>>> osd_op_reply(48004 300.0000e274 [delete] v0'0 uv1349638 ondisk = -2
>>>>>> ((2) No such file or directory)) v6 ==== 179+0+0 (1182549251 0 0)
>>>>>> 0x66c5c80 con 0x3d5a7e0
>>>>>>
>>>>>> Any idea what this could be Greg?
>>>>>
>>>>> To follow this up I found this ticket from 9 months ago:
>>>>> http://tracker.ceph.com/issues/10449 In there Yan says:
>>>>>
>>>>> "it's a kernel bug. hang request prevents mds from trimming
>>>>> completed_requests in sessionmap. there is nothing to do with mds.
>>>>> (maybe we should add some code to MDS to show warning when this bug
>>>>> happens)"
>>>>>
>>>>> When I was debugging this I saw an OSD (not cephfs client) operation
>>>>> stuck for a long time along with the MDS error:
>>>>>
>>>>> HEALTH_WARN 1 requests are blocked > 32 sec; 1 osds have slow
>>>>> requests; mds cluster is degraded; mds0: Behind on trimming (709/30)
>>>>> 1 ops are blocked > 16777.2 sec
>>>>> 1 ops are blocked > 16777.2 sec on osd.28
>>>>>
>>>>> I did eventually bounce the OSD in question and it hasn't become stuck
>>>>> since, but the MDS is still eating it every time with the "Message too
>>>>> long" error on the session map.
>>>>>
>>>>> I'm not quite sure where to go from here.
>>>>
>>>> First time I had a chance to use the new recovery tools. I was able to
>>>> replay the journal, reset it and then reset the sessionmap. The MDS
>>>> came back to life and so far everything looks good. Yay.
>>>>
>>>> Triggering this bug/issue takes a pretty interesting set of steps.
>>>
>>> Spoke too soon; a missing dir is now causing the MDS to restart itself.
>>>
>>>     -6> 2015-10-11 22:40:47.300169 7f580c7b9700  5 -- op tracker --
>>> seq: 4, time: 2015-10-11 22:40:47.300168, event: finishing request,
>>> op: client_request(client.3597476:21480382 rmdir #100015e0be2/58
>>> 2015-10-11 21:34:49.224905 RETRY=36)
>>>     -5> 2015-10-11 22:40:47.300208 7f580c7b9700  5 -- op tracker --
>>> seq: 4, time: 2015-10-11 22:40:47.300208, event: cleaned up request,
>>> op: client_request(client.3597476:21480382 rmdir #100015e0be2/58
>>> 2015-10-11 21:34:49.224905 RETRY=36)
>>>     -4> 2015-10-11 22:40:47.300231 7f580c7b9700  5 -- op tracker --
>>> seq: 4, time: 2015-10-11 22:40:47.300231, event: done, op:
>>> client_request(client.3597476:21480382 rmdir #100015e0be2/58
>>> 2015-10-11 21:34:49.224905 RETRY=36)
>>>     -3> 2015-10-11 22:40:47.300284 7f580e0bd700  0
>>> mds.0.cache.dir(100048df076) _fetched missing object for [dir
>>> 100048df076 /petabucket/beta/6d/f6/ [2,head] auth v=0 cv=0/0 ap=1+0+0
>>> state=1073741952 f() n() hs=0+0,ss=0+0 | waiter=1 authpin=1 0x6debcf8]
>>>     -2> 2015-10-11 22:40:47.300313 7f580e0bd700 -1
>>> log_channel(cluster) log [ERR] : dir 100048df076 object missing on
>>> disk; some files may be lost
>>>     -1> 2015-10-11 22:40:47.300549 7f580c7b9700  5 -- op tracker --
>>> seq: 6, time: 2015-10-11 22:40:47.300549, event: acquired locks, op:
>>> client_request(client.3710941:1805 mkdir
>>> #100048df076/1dd4c66b916cf55f19beafc0574ac2dd923bc7efcc5b7757198f323e04d0.dl
>>> 2015-10-11 22:36:32.489705 RETRY=33)
>>>      0> 2015-10-11 22:40:47.303770 7f580c7b9700 -1 mds/CDir.cc: In
>>> function 'fnode_t* CDir::project_fnode()' thread 7f580c7b9700 time
>>> 2015-10-11 22:40:47.300581
>>> mds/CDir.cc: 1267: FAILED assert(get_version() != 0)
>>>
>>>  ceph version 0.94.3 (95cefea9fd9ab740263bf8bb4796fd864d9afe2b)
>>>  1: (ceph::__ceph_assert_fail(char const*, char const*, int, char
>>> const*)+0x8b) [0x94cc1b]
>>>  2: (CDir::project_fnode()+0x2da) [0x75da1a]
>>
>> Worked around this as well. Stopped the client that was trying to
>> access this dir (or delete it, not sure) and moved the dir somewhere
>> else (because I couldn't delete it).
>>
>> Here's the series of steps that happened today (not sure if all are
>> related), in case the postmortem is interesting to anybody following
>> along at home.
>>
>> 1. One (of 5) monitors somehow gets OOM killed after eating up 10 GB of RAM.
>> 2. An OSD blocks on an IO request.
>> 3. This is when the MDS starts growing its SessionMap until it
>> eventually gets too big. This is related to
>> http://tracker.ceph.com/issues/10449 (not the client bug, but the
>> too-large SessionMap).
>> 4. The MDS suicides, and then the two MDS machines start ping-ponging
>> in recovery but always fail to save the SessionMap.
>> 5. I stop the MDS, recover the journal and reset the session map.
>
> Exactly what did you do in this step? I have a suspicion you didn't
> recover and flush out the journal all the way, and that's what caused
> your directory to go bad.
> -Greg

I followed the directions from the hammer disaster-recovery page for
CephFS: the backup step, dentry recovery, journal truncation and the
session clear, in that order.
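
For reference, the commands on that page boil down to roughly the
following (quoting from memory, so the exact invocations may differ
slightly):

  # back up the journal before touching anything
  cephfs-journal-tool journal export backup.bin

  # recover whatever dentries can be salvaged from the journal
  cephfs-journal-tool event recover_dentries summary

  # truncate/reset the journal
  cephfs-journal-tool journal reset

  # clear the session table (this is what got rid of the oversized SessionMap)
  cephfs-table-tool all reset session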

>
>> 6. The MDS comes back to life after that.
>> 7. Ceph loses one directory, and as soon as the client tries to access
>> it the MDS also suicides. Standing bug: http://tracker.ceph.com/issues/9880
>> 8. Lame workaround (moving/renaming the bad dir) so the client doesn't
>> access it.
>>
>> --
>> Milosz Tanski
>> CTO
>> 16 East 34th Street, 15th floor
>> New York, NY 10016
>>
>> p: 646-253-9055
>> e: milosz@adfin.com



-- 
Milosz Tanski
CTO
16 East 34th Street, 15th floor
New York, NY 10016

p: 646-253-9055
e: milosz@adfin.com
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: MDS stuck in a crash loop
  2015-10-19 15:31               ` Milosz Tanski
@ 2015-10-21 18:29                 ` Gregory Farnum
  2015-10-21 21:33                   ` John Spray
  0 siblings, 1 reply; 19+ messages in thread
From: Gregory Farnum @ 2015-10-21 18:29 UTC (permalink / raw)
  To: John Spray; +Cc: ceph-devel

On Mon, Oct 19, 2015 at 8:31 AM, Milosz Tanski <milosz@adfin.com> wrote:
> On Wed, Oct 14, 2015 at 12:46 AM, Gregory Farnum <gfarnum@redhat.com> wrote:
>> On Sun, Oct 11, 2015 at 7:36 PM, Milosz Tanski <milosz@adfin.com> wrote:
>>> On Sun, Oct 11, 2015 at 6:44 PM, Milosz Tanski <milosz@adfin.com> wrote:
>>>> On Sun, Oct 11, 2015 at 6:01 PM, Milosz Tanski <milosz@adfin.com> wrote:
>>>>> On Sun, Oct 11, 2015 at 5:33 PM, Milosz Tanski <milosz@adfin.com> wrote:
>>>>>> On Sun, Oct 11, 2015 at 5:24 PM, Milosz Tanski <milosz@adfin.com> wrote:
>>>>>>> On Sun, Oct 11, 2015 at 1:16 PM, Gregory Farnum <gfarnum@redhat.com> wrote:
>>>>>>>> On Sun, Oct 11, 2015 at 10:09 AM, Milosz Tanski <milosz@adfin.com> wrote:
>>>>>>>>> About an hour ago my MDSs (primary and follower) started ping-pong
>>>>>>>>> crashing with this message. I've spent about 30 minutes looking into
>>>>>>>>> it but nothing yet.
>>>>>>>>>
>>>>>>>>> This is from a 0.94.3 MDS
>>>>>>>>>
>>>>>>>>
>>>>>>>>>      0> 2015-10-11 17:01:23.596008 7fd4f52ad700 -1 mds/SessionMap.cc:
>>>>>>>>> In function 'virtual void C_IO_SM_Save::finish(int)' thread
>>>>>>>>> 7fd4f52ad700 time 2015-10-11 17:01:23.594089
>>>>>>>>> mds/SessionMap.cc: 120: FAILED assert(r == 0)
>>>>>>>>
>>>>>>>> These "r == 0" asserts pretty much always mean that the MDS did did a
>>>>>>>> read or write to RADOS (the OSDs) and got an error of some kind back.
>>>>>>>> (Or in the case of the OSDs, access to the local filesystem returned
>>>>>>>> an error, etc.) I don't think these writes include any safety checks
>>>>>>>> which would let the MDS break it which means that probably the OSD is
>>>>>>>> actually returning an error — odd, but not impossible.
>>>>>>>>
>>>>>>>> Notice that the assert happened in thread 7fd4f52ad700, and look for
>>>>>>>> the stuff in that thread. You should be able to find an OSD op reply
>>>>>>>> (on the SessionMap object) coming in and reporting an error code.
>>>>>>>> -Greg
>>>>>>>
>>>>>>> I only see two error ops in that whole MDS session. Neither one happened
>>>>>>> on the same thread (7f5ab6000700 in this file). But it looks like the
>>>>>>> only sessionmap error is the -90 "Message too long" one.
>>>>>>>
>>>>>>> mtanski@tiny:~$ cat single_crash.log | grep 'osd_op_reply' | grep -v
>>>>>>> 'ondisk = 0'
>>>>>>>  -3946> 2015-10-11 20:51:11.013965 7f5ab20f2700  1 --
>>>>>>> 10.0.5.31:6802/27121 <== osd.25 10.0.5.57:6804/32341 6163 ====
>>>>>>> osd_op_reply(46349 mds0_sessionmap [writefull 0~95168363] v0'0 uv0
>>>>>>> ondisk = -90 ((90) Message too long)) v6 ==== 182+0+0 (2955408122 0 0)
>>>>>>> 0x3a55d340 con 0x3d5a3c0
>>>>>>>   -705> 2015-10-11 20:51:11.374132 7f5ab22f4700  1 --
>>>>>>> 10.0.5.31:6802/27121 <== osd.28 10.0.5.50:6801/1787 5297 ====
>>>>>>> osd_op_reply(48004 300.0000e274 [delete] v0'0 uv1349638 ondisk = -2
>>>>>>> ((2) No such file or directory)) v6 ==== 179+0+0 (1182549251 0 0)
>>>>>>> 0x66c5c80 con 0x3d5a7e0
>>>>>>>
>>>>>>> Any idea what this could be Greg?
>>>>>>
>>>>>> To follow this up I found this ticket from 9 months ago:
>>>>>> http://tracker.ceph.com/issues/10449 In there Yan says:
>>>>>>
>>>>>> "it's a kernel bug. hang request prevents mds from trimming
>>>>>> completed_requests in sessionmap. there is nothing to do with mds.
>>>>>> (maybe we should add some code to MDS to show warning when this bug
>>>>>> happens)"
>>>>>>
>>>>>> When I was debugging this I saw an OSD (not cephfs client) operation
>>>>>> stuck for a long time along with the MDS error:
>>>>>>
>>>>>> HEALTH_WARN 1 requests are blocked > 32 sec; 1 osds have slow
>>>>>> requests; mds cluster is degraded; mds0: Behind on trimming (709/30)
>>>>>> 1 ops are blocked > 16777.2 sec
>>>>>> 1 ops are blocked > 16777.2 sec on osd.28
>>>>>>
>>>>>> I did eventually bounce the OSD in question and it hasn't become stuck
>>>>>> since, but the MDS is still eating it every time with the "Message too
>>>>>> long" error on the session map.
>>>>>>
>>>>>> I'm not quite sure where to go from here.
>>>>>
>>>>> First time I had a chance to use the new recovery tools. I was able to
>>>>> replay the journal, reset it and then reset the sessionmap. The MDS
>>>>> came back to life and so far everything looks good. Yay.
>>>>>
>>>>> Triggering this bug/issue takes a pretty interesting set of steps.
>>>>
>>>> Spoke too soon; a missing dir is now causing the MDS to restart itself.
>>>>
>>>>     -6> 2015-10-11 22:40:47.300169 7f580c7b9700  5 -- op tracker --
>>>> seq: 4, time: 2015-10-11 22:40:47.300168, event: finishing request,
>>>> op: client_request(client.3597476:21480382 rmdir #100015e0be2/58
>>>> 2015-10-11 21:34:49.224905 RETRY=36)
>>>>     -5> 2015-10-11 22:40:47.300208 7f580c7b9700  5 -- op tracker --
>>>> seq: 4, time: 2015-10-11 22:40:47.300208, event: cleaned up request,
>>>> op: client_request(client.3597476:21480382 rmdir #100015e0be2/58
>>>> 2015-10-11 21:34:49.224905 RETRY=36)
>>>>     -4> 2015-10-11 22:40:47.300231 7f580c7b9700  5 -- op tracker --
>>>> seq: 4, time: 2015-10-11 22:40:47.300231, event: done, op:
>>>> client_request(client.3597476:21480382 rmdir #100015e0be2/58
>>>> 2015-10-11 21:34:49.224905 RETRY=36)
>>>>     -3> 2015-10-11 22:40:47.300284 7f580e0bd700  0
>>>> mds.0.cache.dir(100048df076) _fetched missing object for [dir
>>>> 100048df076 /petabucket/beta/6d/f6/ [2,head] auth v=0 cv=0/0 ap=1+0+0
>>>> state=1073741952 f() n() hs=0+0,ss=0+0 | waiter=1 authpin=1 0x6debcf8]
>>>>     -2> 2015-10-11 22:40:47.300313 7f580e0bd700 -1
>>>> log_channel(cluster) log [ERR] : dir 100048df076 object missing on
>>>> disk; some files may be lost
>>>>     -1> 2015-10-11 22:40:47.300549 7f580c7b9700  5 -- op tracker --
>>>> seq: 6, time: 2015-10-11 22:40:47.300549, event: acquired locks, op:
>>>> client_request(client.3710941:1805 mkdir
>>>> #100048df076/1dd4c66b916cf55f19beafc0574ac2dd923bc7efcc5b7757198f323e04d0.dl
>>>> 2015-10-11 22:36:32.489705 RETRY=33)
>>>>      0> 2015-10-11 22:40:47.303770 7f580c7b9700 -1 mds/CDir.cc: In
>>>> function 'fnode_t* CDir::project_fnode()' thread 7f580c7b9700 time
>>>> 2015-10-11 22:40:47.300581
>>>> mds/CDir.cc: 1267: FAILED assert(get_version() != 0)
>>>>
>>>>  ceph version 0.94.3 (95cefea9fd9ab740263bf8bb4796fd864d9afe2b)
>>>>  1: (ceph::__ceph_assert_fail(char const*, char const*, int, char
>>>> const*)+0x8b) [0x94cc1b]
>>>>  2: (CDir::project_fnode()+0x2da) [0x75da1a]
>>>
>>> Worked around this as well. Stopped the client that was trying to
>>> access this dir (or delete it, not sure) and moved the dir somewhere
>>> else (because I couldn't delete it).
>>>
>>> Here's the series of steps that happened today (not sure if all are
>>> related), in case the postmortem is interesting to anybody following
>>> along at home.
>>>
>>> 1. One (of 5) monitors somehow gets OOM killed after eating up 10 GB of RAM.
>>> 2. An OSD blocks on an IO request.
>>> 3. This is when the MDS starts growing its SessionMap until it
>>> eventually gets too big. This is related to
>>> http://tracker.ceph.com/issues/10449 (not the client bug, but the
>>> too-large SessionMap).
>>> 4. The MDS suicides, and then the two MDS machines start ping-ponging
>>> in recovery but always fail to save the SessionMap.
>>> 5. I stop the MDS, recover the journal and reset the session map.
>>
>> Exactly what did you do in this step? I have a suspicion you didn't
>> recover and flush out the journal all the way, and that's what caused
>> your directory to go bad.
>> -Greg
>
> I followed the directions from the hammer disaster-recovery page for
> CephFS: the backup step, dentry recovery, journal truncation and the
> session clear, in that order.

John, I know you've got
https://github.com/ceph/ceph-qa-suite/pull/647. I think that's
supposed to be for this, but I'm not sure if you spotted any issues
with it or if we need to do some more diagnosing?
-Greg
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: MDS stuck in a crash loop
  2015-10-21 18:29                 ` Gregory Farnum
@ 2015-10-21 21:33                   ` John Spray
  2015-10-21 21:33                     ` John Spray
  0 siblings, 1 reply; 19+ messages in thread
From: John Spray @ 2015-10-21 21:33 UTC (permalink / raw)
  To: Gregory Farnum; +Cc: ceph-devel

> John, I know you've got
> https://github.com/ceph/ceph-qa-suite/pull/647. I think that's
> supposed to be for this, but I'm not sure if you spotted any issues
> with it or if we need to do some more diagnosing?

That test path is just verifying that we do handle dirs without dying
in at least one case -- it passes with the existing ceph code, so it's
not reproducing this issue.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: MDS stuck in a crash loop
  2015-10-21 21:33                   ` John Spray
@ 2015-10-21 21:33                     ` John Spray
  2015-10-21 21:34                       ` Gregory Farnum
  2015-10-22 12:43                       ` Milosz Tanski
  0 siblings, 2 replies; 19+ messages in thread
From: John Spray @ 2015-10-21 21:33 UTC (permalink / raw)
  To: Gregory Farnum; +Cc: ceph-devel

On Wed, Oct 21, 2015 at 10:33 PM, John Spray <jspray@redhat.com> wrote:
>> John, I know you've got
>> https://github.com/ceph/ceph-qa-suite/pull/647. I think that's
>> supposed to be for this, but I'm not sure if you spotted any issues
>> with it or if we need to do some more diagnosing?
>
> That test path is just verifying that we do handle dirs without dying
> in at least one case -- it passes with the existing ceph code, so it's
> not reproducing this issue.

Clicked send too soon; I was about to add...

Milosz mentioned that they don't have the data from the system in the
broken state, so I don't have any bright ideas about learning more
about what went wrong here unfortunately.

John

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: MDS stuck in a crash loop
  2015-10-21 21:33                     ` John Spray
@ 2015-10-21 21:34                       ` Gregory Farnum
  2015-10-22 12:43                       ` Milosz Tanski
  1 sibling, 0 replies; 19+ messages in thread
From: Gregory Farnum @ 2015-10-21 21:34 UTC (permalink / raw)
  To: John Spray; +Cc: ceph-devel

On Wed, Oct 21, 2015 at 2:33 PM, John Spray <jspray@redhat.com> wrote:
> On Wed, Oct 21, 2015 at 10:33 PM, John Spray <jspray@redhat.com> wrote:
>>> John, I know you've got
>>> https://github.com/ceph/ceph-qa-suite/pull/647. I think that's
>>> supposed to be for this, but I'm not sure if you spotted any issues
>>> with it or if we need to do some more diagnosing?
>>
>> That test path is just verifying that we do handle dirs without dying
>> in at least one case -- it passes with the existing ceph code, so it's
>> not reproducing this issue.
>
> Clicked send too soon; I was about to add...
>
> Milosz mentioned that they don't have the data from the system in the
> broken state, so I don't have any bright ideas about learning more
> about what went wrong here unfortunately.

Yeah, I guess we'll just need to watch out for it in the future. :/

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: MDS stuck in a crash loop
  2015-10-21 21:33                     ` John Spray
  2015-10-21 21:34                       ` Gregory Farnum
@ 2015-10-22 12:43                       ` Milosz Tanski
  2015-10-22 12:48                         ` John Spray
  1 sibling, 1 reply; 19+ messages in thread
From: Milosz Tanski @ 2015-10-22 12:43 UTC (permalink / raw)
  To: John Spray; +Cc: Gregory Farnum, ceph-devel

On Wed, Oct 21, 2015 at 5:33 PM, John Spray <jspray@redhat.com> wrote:
> On Wed, Oct 21, 2015 at 10:33 PM, John Spray <jspray@redhat.com> wrote:
>>> John, I know you've got
>>> https://github.com/ceph/ceph-qa-suite/pull/647. I think that's
>>> supposed to be for this, but I'm not sure if you spotted any issues
>>> with it or if we need to do some more diagnosing?
>>
>> That test path is just verifying that we do handle dirs without dying
>> in at least one case -- it passes with the existing ceph code, so it's
>> not reproducing this issue.
>
> Clicked send too soon; I was about to add...
>
> Milosz mentioned that they don't have the data from the system in the
> broken state, so I don't have any bright ideas about learning more
> about what went wrong here unfortunately.
>

Sorry about that, wasn't thinking at the time and just wanted to get
this up and going as quickly as possible :(

If this happens next time I'll be more careful to keep more evidence.
I think multi-fs support in the same rados namespace would actually
have helped here, since it makes it easier to create a new fs and leave
the other one around (for investigation).

But it makes me wonder whether the broken dir scenario could be
replicated by hand using rados calls. There's a pretty generic ticket
for not dying on dir errors, but I imagine the code can be audited and
steps to cause a synthetic error can be produced.
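
For instance, something along these lines ought to reproduce the
dentry-without-dirfrag state by hand on a throwaway test cluster (just
a sketch; the mount point and metadata pool name here are assumptions):

  # create a directory through a client mount, then find its inode number
  mkdir /mnt/cephfs/brokendir
  ino=$(stat -c %i /mnt/cephfs/brokendir)

  # dirfrag objects in the metadata pool are named <inode hex>.<frag>,
  # e.g. 100048df076.00000000 for an unfragmented directory
  obj=$(printf '%x.00000000' "$ino")

  # delete the dirfrag object out from under the MDS
  rados -p metadata rm "$obj"

  # restart the MDS so it drops the cached dirfrag, then have a client
  # stat or create something inside the directory and watch the MDS log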

-- 
Milosz Tanski
CTO
16 East 34th Street, 15th floor
New York, NY 10016

p: 646-253-9055
e: milosz@adfin.com

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: MDS stuck in a crash loop
  2015-10-22 12:43                       ` Milosz Tanski
@ 2015-10-22 12:48                         ` John Spray
  2015-10-22 13:14                           ` Sage Weil
  2015-10-22 15:51                           ` Milosz Tanski
  0 siblings, 2 replies; 19+ messages in thread
From: John Spray @ 2015-10-22 12:48 UTC (permalink / raw)
  To: Milosz Tanski; +Cc: Gregory Farnum, ceph-devel

On Thu, Oct 22, 2015 at 1:43 PM, Milosz Tanski <milosz@adfin.com> wrote:
> On Wed, Oct 21, 2015 at 5:33 PM, John Spray <jspray@redhat.com> wrote:
>> On Wed, Oct 21, 2015 at 10:33 PM, John Spray <jspray@redhat.com> wrote:
>>>> John, I know you've got
>>>> https://github.com/ceph/ceph-qa-suite/pull/647. I think that's
>>>> supposed to be for this, but I'm not sure if you spotted any issues
>>>> with it or if we need to do some more diagnosing?
>>>
>>> That test path is just verifying that we do handle dirs without dying
>>> in at least one case -- it passes with the existing ceph code, so it's
>>> not reproducing this issue.
>>
>> Clicked send too soon; I was about to add...
>>
>> Milosz mentioned that they don't have the data from the system in the
>> broken state, so I don't have any bright ideas about learning more
>> about what went wrong here unfortunately.
>>
>
> Sorry about that, wasn't thinking at the time and just wanted to get
> this up and going as quickly as possible :(
>
> If this happens next time I'll be more careful to keep more evidence.
> I think multi-fs support in the same rados namespace would actually
> have helped here, since it makes it easier to create a new fs and leave
> the other one around (for investigation).

Yep, good point.  I am a known enthusiast for multi-filesystem support :-)

> But it makes me wonder whether the broken dir scenario could be
> replicated by hand using rados calls. There's a pretty generic ticket
> for not dying on dir errors, but I imagine the code can be audited and
> steps to cause a synthetic error can be produced.

Yes, that part I have done (and will build into the automated tests in
due course) -- the bit that is still a mystery is how the damage
occurred to begin with.

John

>
> --
> Milosz Tanski
> CTO
> 16 East 34th Street, 15th floor
> New York, NY 10016
>
> p: 646-253-9055
> e: milosz@adfin.com

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: MDS stuck in a crash loop
  2015-10-22 12:48                         ` John Spray
@ 2015-10-22 13:14                           ` Sage Weil
  2015-10-22 15:51                           ` Milosz Tanski
  1 sibling, 0 replies; 19+ messages in thread
From: Sage Weil @ 2015-10-22 13:14 UTC (permalink / raw)
  To: John Spray; +Cc: Milosz Tanski, Gregory Farnum, ceph-devel

On Thu, 22 Oct 2015, John Spray wrote:
> On Thu, Oct 22, 2015 at 1:43 PM, Milosz Tanski <milosz@adfin.com> wrote:
> > On Wed, Oct 21, 2015 at 5:33 PM, John Spray <jspray@redhat.com> wrote:
> >> On Wed, Oct 21, 2015 at 10:33 PM, John Spray <jspray@redhat.com> wrote:
> >>>> John, I know you've got
> >>>> https://github.com/ceph/ceph-qa-suite/pull/647. I think that's
> >>>> supposed to be for this, but I'm not sure if you spotted any issues
> >>>> with it or if we need to do some more diagnosing?
> >>>
> >>> That test path is just verifying that we do handle dirs without dying
> >>> in at least one case -- it passes with the existing ceph code, so it's
> >>> not reproducing this issue.
> >>
> >> Clicked send too soon; I was about to add...
> >>
> >> Milosz mentioned that they don't have the data from the system in the
> >> broken state, so I don't have any bright ideas about learning more
> >> about what went wrong here unfortunately.
> >>
> >
> > Sorry about that, wasn't thinking at the time and just wanted to get
> > this up and going as quickly as possible :(
> >
> > If this happens next time I'll be more careful to keep more evidence.
> > I think multi-fs support in the same rados namespace would actually
> > have helped here, since it makes it easier to create a new fs and leave
> > the other one around (for investigation).
> 
> Yep, good point.  I am a known enthusiast for multi-filesystem support :-)

A rados pool export on the metadata pool would have helped, too.  That 
doesn't include the data objects' backtrace metadata, though.  I wonder if 
we should make a cephfs metadata imager tool (similar to the tools that 
are available for xfs) that captures the metadata state of the file system 
from both pools.  On the data pool side it'd just record the object names, 
xattrs, and object sizes, ignoring the data.

It wouldn't anonymize filenames (that is tricky without breaking the mds 
dir hashing), but it excludes data and would probably be 
sufficient for most users...
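
As a very rough sketch of what the data-pool side could look like (not 
an existing tool; the pool name and output format here are made up):

  #!/bin/sh
  # image the data pool: object names, sizes and xattrs only, no data
  POOL=cephfs_data
  OUT=data-pool-image.txt
  : > "$OUT"
  rados -p "$POOL" ls | while read -r obj; do
      echo "object: $obj" >> "$OUT"
      rados -p "$POOL" stat "$obj" >> "$OUT"    # records mtime and size
      # dump each xattr (e.g. the "parent" backtrace), base64-encoded
      for attr in $(rados -p "$POOL" listxattr "$obj"); do
          printf 'xattr %s: ' "$attr" >> "$OUT"
          rados -p "$POOL" getxattr "$obj" "$attr" | base64 >> "$OUT"
      done
  done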

sage

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: MDS stuck in a crash loop
  2015-10-22 12:48                         ` John Spray
  2015-10-22 13:14                           ` Sage Weil
@ 2015-10-22 15:51                           ` Milosz Tanski
  1 sibling, 0 replies; 19+ messages in thread
From: Milosz Tanski @ 2015-10-22 15:51 UTC (permalink / raw)
  To: John Spray; +Cc: Gregory Farnum, ceph-devel

On Thu, Oct 22, 2015 at 8:48 AM, John Spray <jspray@redhat.com> wrote:
> On Thu, Oct 22, 2015 at 1:43 PM, Milosz Tanski <milosz@adfin.com> wrote:
>> On Wed, Oct 21, 2015 at 5:33 PM, John Spray <jspray@redhat.com> wrote:
>>> On Wed, Oct 21, 2015 at 10:33 PM, John Spray <jspray@redhat.com> wrote:
>>>>> John, I know you've got
>>>>> https://github.com/ceph/ceph-qa-suite/pull/647. I think that's
>>>>> supposed to be for this, but I'm not sure if you spotted any issues
>>>>> with it or if we need to do some more diagnosing?
>>>>
>>>> That test path is just verifying that we do handle dirs without dying
>>>> in at least one case -- it passes with the existing ceph code, so it's
>>>> not reproducing this issue.
>>>
>>> Clicked send too soon; I was about to add...
>>>
>>> Milosz mentioned that they don't have the data from the system in the
>>> broken state, so I don't have any bright ideas about learning more
>>> about what went wrong here unfortunately.
>>>
>>
>> Sorry about that, wasn't thinking at the time and just wanted to get
>> this up and going as quickly as possible :(
>>
>> If this happens next time I'll be more careful to keep more evidence.
>> I think multi-fs support in the same rados namespace would actually
>> have helped here, since it makes it easier to create a new fs and leave
>> the other one around (for investigation).
>
> Yep, good point.  I am a known enthusiast for multi-filesystem support :-)
>
>> But it makes me wonder whether the broken dir scenario could be
>> replicated by hand using rados calls. There's a pretty generic ticket
>> for not dying on dir errors, but I imagine the code can be audited and
>> steps to cause a synthetic error can be produced.
>
> Yes, that part I have done (and will build into the automated tests in
> due course) -- the bit that is still a mystery is how the damage
> occurred to begin with.

John, my money is on me somehow fumbling the recovery process. And,
with the bash history having rolled off, I'm going to assume that's
what happened.

-- 
Milosz Tanski
CTO
16 East 34th Street, 15th floor
New York, NY 10016

p: 646-253-9055
e: milosz@adfin.com

^ permalink raw reply	[flat|nested] 19+ messages in thread

end of thread, other threads:[~2015-10-22 15:52 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-10-11 17:09 MDS stuck in a crash loop Milosz Tanski
2015-10-11 17:16 ` Gregory Farnum
2015-10-11 21:24   ` Milosz Tanski
2015-10-11 21:33     ` Milosz Tanski
2015-10-11 22:01       ` Milosz Tanski
2015-10-11 22:44         ` Milosz Tanski
2015-10-12  2:36           ` Milosz Tanski
2015-10-14  4:46             ` Gregory Farnum
2015-10-19 15:31               ` Milosz Tanski
2015-10-21 18:29                 ` Gregory Farnum
2015-10-21 21:33                   ` John Spray
2015-10-21 21:33                     ` John Spray
2015-10-21 21:34                       ` Gregory Farnum
2015-10-22 12:43                       ` Milosz Tanski
2015-10-22 12:48                         ` John Spray
2015-10-22 13:14                           ` Sage Weil
2015-10-22 15:51                           ` Milosz Tanski
2015-10-14 13:21             ` John Spray
2015-10-19 15:28               ` Milosz Tanski

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.