* boost recoverystate handle_log fault
@ 2011-07-11 14:26 huang jun
  2011-07-11 19:38 ` Samuel Just
  0 siblings, 1 reply; 3+ messages in thread
From: huang jun @ 2011-07-11 14:26 UTC (permalink / raw)
  To: ceph-devel

[-- Attachment #1: Type: text/plain, Size: 558 bytes --]

Hi all,
We are running Ceph v0.30 with 31 OSDs on Linux 2.6.37. After we set up
the cluster, many (10) OSDs went down because their cosd processes were
killed; the OSD log is in the attached file "osd-failed".

The same phenomenon occurred once a week ago. That time we fixed it by
simply rebuilding the cluster, but this time we do not want to take that
route; we want to find the root cause of the failure.
Why does the SimpleMessenger keep sending RESETSESSION?
What causes the boost::statechart recovery state machine to fail? Could
you give us some advice?

Thanks in advance

[-- Attachment #2: osd-failed.txt --]
[-- Type: text/plain, Size: 14101 bytes --]

2011-07-11 16:22:47.580065 4eafb950 -- 192.168.0.118:6800/26500 >> 192.168.0.212:0/1997725251 pipe(0xe684780 sd=16 pgs=0 cs=0 l=0).accept peer addr is really 192.168.0.212:0/1997725251 (socket is 192.168.0.212:41488/0)
2011-07-11 16:22:47.590243 50b1b950 -- 192.168.0.118:6800/26500 >> 192.168.0.206:0/1951212382 pipe(0xe684000 sd=17 pgs=0 cs=0 l=0).accept peer addr is really 192.168.0.206:0/1951212382 (socket is 192.168.0.206:51650/0)
2011-07-11 16:22:47.591856 4eeff950 -- 192.168.0.118:6800/26500 >> 192.168.0.213:0/2033517006 pipe(0xad7b280 sd=18 pgs=0 cs=0 l=0).accept peer addr is really 192.168.0.213:0/2033517006 (socket is 192.168.0.213:52987/0)
2011-07-11 16:22:47.600186 57989950 -- 192.168.0.118:6800/26500 >> 192.168.0.214:0/10984431 pipe(0xad7b500 sd=19 pgs=0 cs=0 l=0).accept peer addr is really 192.168.0.214:0/10984431 (socket is 192.168.0.214:35701/0)
2011-07-11 16:22:47.835041 5afbf950 -- 192.168.0.118:6802/26500 >> 192.168.0.109:6805/14310 pipe(0x2c4b000 sd=20 pgs=0 cs=0 l=0).accept we reset (peer sent cseq 2), sending RESETSESSION
2011-07-11 16:22:48.819890 4eeff950 -- 192.168.0.118:6801/26500 >> 192.168.0.109:6804/14310 pipe(0x2c4bc80 sd=12 pgs=0 cs=0 l=0).accept we reset (peer sent cseq 2), sending RESETSESSION
2011-07-11 16:22:49.167883 4e8f9950 -- 192.168.0.118:6802/26500 >> 192.168.0.106:6805/14457 pipe(0x2c4b780 sd=14 pgs=0 cs=0 l=0).accept we reset (peer sent cseq 2), sending RESETSESSION
2011-07-11 16:22:49.179942 57989950 -- 192.168.0.118:6802/26500 >> 192.168.0.105:6805/10603 pipe(0xe726a00 sd=15 pgs=0 cs=0 l=0).accept we reset (peer sent cseq 2), sending RESETSESSION
2011-07-11 16:22:49.526557 58090950 -- 192.168.0.118:6802/26500 >> 192.168.0.106:6802/14367 pipe(0xe726280 sd=18 pgs=0 cs=0 l=0).accept we reset (peer sent cseq 2), sending RESETSESSION
2011-07-11 16:22:50.280940 4eeff950 -- 192.168.0.118:6801/26500 >> 192.168.0.109:6804/14310 pipe(0x2c4bc80 sd=12 pgs=1162 cs=1 l=0).fault with nothing to send, going to standby
2011-07-11 16:22:50.353199 5afbf950 -- 192.168.0.118:6800/26500 >> 192.168.0.207:0/1569463766 pipe(0xad7b780 sd=20 pgs=0 cs=0 l=0).accept peer addr is really 192.168.0.207:0/1569463766 (socket is 192.168.0.207:59047/0)
2011-07-11 16:22:50.353827 56676950 -- 192.168.0.118:6800/26500 >> 192.168.0.210:0/2923411330 pipe(0xad7bc80 sd=48 pgs=0 cs=0 l=0).accept peer addr is really 192.168.0.210:0/2923411330 (socket is 192.168.0.210:58943/0)
2011-07-11 16:22:50.356753 56f7f950 -- 192.168.0.118:6800/26500 >> 192.168.0.213:0/2033517006 pipe(0xf051280 sd=50 pgs=0 cs=0 l=0).accept peer addr is really 192.168.0.213:0/2033517006 (socket is 192.168.0.213:52989/0)
2011-07-11 16:22:50.359422 56e7e950 -- 192.168.0.118:6800/26500 >> 192.168.0.214:0/10984431 pipe(0xf051000 sd=49 pgs=0 cs=0 l=0).accept peer addr is really 192.168.0.214:0/10984431 (socket is 192.168.0.214:35703/0)
2011-07-11 16:22:50.360825 57585950 -- 192.168.0.118:6800/26500 >> 192.168.0.211:0/1980302047 pipe(0xacfc780 sd=51 pgs=0 cs=0 l=0).accept peer addr is really 192.168.0.211:0/1980302047 (socket is 192.168.0.211:42706/0)
2011-07-11 16:22:50.361602 57e8e950 -- 192.168.0.118:6800/26500 >> 192.168.0.212:0/1997725251 pipe(0xacfc500 sd=52 pgs=0 cs=0 l=0).accept peer addr is really 192.168.0.212:0/1997725251 (socket is 192.168.0.212:41489/0)
2011-07-11 16:22:50.387648 58696950 -- 192.168.0.118:6800/26500 >> 192.168.0.208:0/115956664 pipe(0xacfc000 sd=53 pgs=0 cs=0 l=0).accept peer addr is really 192.168.0.208:0/115956664 (socket is 192.168.0.208:57194/0)
2011-07-11 16:22:50.434862 58b9b950 -- 192.168.0.118:6800/26500 >> 192.168.0.206:0/1951212382 pipe(0xacfc280 sd=54 pgs=0 cs=0 l=0).accept peer addr is really 192.168.0.206:0/1951212382 (socket is 192.168.0.206:51652/0)
2011-07-11 16:22:50.445896 58f9f950 -- 192.168.0.118:6800/26500 >> 192.168.0.209:0/300110026 pipe(0xacfcc80 sd=55 pgs=0 cs=0 l=0).accept peer addr is really 192.168.0.209:0/300110026 (socket is 192.168.0.209:38556/0)
2011-07-11 16:22:50.999712 57787950 -- 192.168.0.118:6801/26500 >> 192.168.0.109:6804/14310 pipe(0x2c4bc80 sd=12 pgs=1162 cs=2 l=0).connect got RESETSESSION
2011-07-11 16:22:52.946266 4b6f1950 log [INF] : 1.4c7 scrub ok
osd/PG.cc: In function 'PG::RecoveryState::Crashed::Crashed(boost::statechart::state<PG::RecoveryState::Crashed, PG::RecoveryState::RecoveryMachine, boost::mpl::list<mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na>, (boost::statechart::history_mode)0u>::my_context)', in thread '0x49cec950'
osd/PG.cc: 3882: FAILED assert(0 == "we got a bad state machine event")
 
 1: (PG::RecoveryState::Crashed::Crashed(boost::statechart::state<PG::RecoveryState::Crashed, PG::RecoveryState::RecoveryMachine, boost::mpl::list<mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na>, (boost::statechart::history_mode)0>::my_context)+0xb6) [0x562116]
 2: (boost::statechart::detail::inner_constructor<boost::mpl::l_item<mpl_::long_<1l>, PG::RecoveryState::Crashed, boost::mpl::l_end>, boost::statechart::state_machine<PG::RecoveryState::RecoveryMachine, PG::RecoveryState::Initial, std::allocator<void>, boost::statechart::null_exception_translator> >::construct(boost::statechart::state_machine<PG::RecoveryState::RecoveryMachine, PG::RecoveryState::Initial, std::allocator<void>, boost::statechart::null_exception_translator>* const&, boost::statechart::state_machine<PG::RecoveryState::RecoveryMachine, PG::RecoveryState::Initial, std::allocator<void>, boost::statechart::null_exception_translator>&)+0x26) [0x59fb86]
 3: (boost::statechart::simple_state<PG::RecoveryState::Started, PG::RecoveryState::RecoveryMachine, PG::RecoveryState::Start, (boost::statechart::history_mode)0>::react_impl(boost::statechart::event_base const&, void const*)+0xc8) [0x5a0448]
 4: (boost::statechart::simple_state<PG::RecoveryState::Primary, PG::RecoveryState::Started, PG::RecoveryState::Peering, (boost::statechart::history_mode)0>::react_impl(boost::statechart::event_base const&, void const*)+0xf9) [0x5a2e49]
 5: (boost::statechart::simple_state<PG::RecoveryState::Peering, PG::RecoveryState::Primary, PG::RecoveryState::GetInfo, (boost::statechart::history_mode)0>::react_impl(boost::statechart::event_base const&, void const*)+0x99) [0x5a3f19]
 6: (boost::statechart::simple_state<PG::RecoveryState::GetInfo, PG::RecoveryState::Peering, boost::mpl::list<mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na>, (boost::statechart::history_mode)0>::react_impl(boost::statechart::event_base const&, void const*)+0x9e) [0x5a516e]
 7: (boost::statechart::state_machine<PG::RecoveryState::RecoveryMachine, PG::RecoveryState::Initial, std::allocator<void>, boost::statechart::null_exception_translator>::process_event(boost::statechart::event_base const&)+0x6b) [0x5a1dab]
 8: (PG::RecoveryState::handle_log(int, MOSDPGLog*, PG::RecoveryCtx*)+0x14a) [0x577c0a]
 9: (OSD::handle_pg_log(MOSDPGLog*)+0x344) [0x51a064]
 10: (OSD::_dispatch(Message*)+0x4ed) [0x5232ad]
 11: (OSD::ms_dispatch(Message*)+0xd9) [0x523cf9]
 12: (SimpleMessenger::dispatch_entry()+0x8e3) [0x6175f3]
 13: (SimpleMessenger::DispatchThread::entry()+0x1c) [0x49140c]
 14: /lib/libpthread.so.0 [0x7fae8e8a0fc7]
 15: (clone()+0x6d) [0x7fae8d51164d]

*** Caught signal (Aborted) **
 in thread 0x49cec950
 1: /bsd/bin/cosd [0x63dce2]
 2: /lib/libpthread.so.0 [0x7fae8e8a8a80]
 3: (gsignal()+0x35) [0x7fae8d473ed5]
 4: (abort()+0x183) [0x7fae8d4753f3]
 5: (__gnu_cxx::__verbose_terminate_handler()+0x115) [0x7fae8dcfbdc5]
 6: /usr/lib/libstdc++.so.6 [0x7fae8dcfa166]
 7: /usr/lib/libstdc++.so.6 [0x7fae8dcfa193]
 8: /usr/lib/libstdc++.so.6 [0x7fae8dcfa28e]
 9: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x37d) [0x6067dd]
 10: (PG::RecoveryState::Crashed::Crashed(boost::statechart::state<PG::RecoveryState::Crashed, PG::RecoveryState::RecoveryMachine, boost::mpl::list<mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na>, (boost::statechart::history_mode)0>::my_context)+0xb6) [0x562116]
 11: (boost::statechart::detail::inner_constructor<boost::mpl::l_item<mpl_::long_<1l>, PG::RecoveryState::Crashed, boost::mpl::l_end>, boost::statechart::state_machine<PG::RecoveryState::RecoveryMachine, PG::RecoveryState::Initial, std::allocator<void>, boost::statechart::null_exception_translator> >::construct(boost::statechart::state_machine<PG::RecoveryState::RecoveryMachine, PG::RecoveryState::Initial, std::allocator<void>, boost::statechart::null_exception_translator>* const&, boost::statechart::state_machine<PG::RecoveryState::RecoveryMachine, PG::RecoveryState::Initial, std::allocator<void>, boost::statechart::null_exception_translator>&)+0x26) [0x59fb86]
 12: (boost::statechart::simple_state<PG::RecoveryState::Started, PG::RecoveryState::RecoveryMachine, PG::RecoveryState::Start, (boost::statechart::history_mode)0>::react_impl(boost::statechart::event_base const&, void const*)+0xc8) [0x5a0448]
 13: (boost::statechart::simple_state<PG::RecoveryState::Primary, PG::RecoveryState::Started, PG::RecoveryState::Peering, (boost::statechart::history_mode)0>::react_impl(boost::statechart::event_base const&, void const*)+0xf9) [0x5a2e49]
 14: (boost::statechart::simple_state<PG::RecoveryState::Peering, PG::RecoveryState::Primary, PG::RecoveryState::GetInfo, (boost::statechart::history_mode)0>::react_impl(boost::statechart::event_base const&, void const*)+0x99) [0x5a3f19]
 15: (boost::statechart::simple_state<PG::RecoveryState::GetInfo, PG::RecoveryState::Peering, boost::mpl::list<mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na>, (boost::statechart::history_mode)0>::react_impl(boost::statechart::event_base const&, void const*)+0x9e) [0x5a516e]
 16: (boost::statechart::state_machine<PG::RecoveryState::RecoveryMachine, PG::RecoveryState::Initial, std::allocator<void>, boost::statechart::null_exception_translator>::process_event(boost::statechart::event_base const&)+0x6b) [0x5a1dab]
 17: (PG::RecoveryState::handle_log(int, MOSDPGLog*, PG::RecoveryCtx*)+0x14a) [0x577c0a]
 18: (OSD::handle_pg_log(MOSDPGLog*)+0x344) [0x51a064]
 19: (OSD::_dispatch(Message*)+0x4ed) [0x5232ad]
 20: (OSD::ms_dispatch(Message*)+0xd9) [0x523cf9]
 21: (SimpleMessenger::dispatch_entry()+0x8e3) [0x6175f3]
 22: (SimpleMessenger::DispatchThread::entry()+0x1c) [0x49140c]
 23: /lib/libpthread.so.0 [0x7fae8e8a0fc7]
 24: (clone()+0x6d) [0x7fae8d51164d]


* Re: boost recoverystate handle_log fault
  2011-07-11 14:26 boost recoverystate handle_log fault huang jun
@ 2011-07-11 19:38 ` Samuel Just
  2011-08-04 13:35   ` huang jun
  0 siblings, 1 reply; 3+ messages in thread
From: Samuel Just @ 2011-07-11 19:38 UTC (permalink / raw)
  To: ceph-devel

The messenger errors probably indicate that that OSD's peers are down.
The boost errors are a result of the OSD receiving a log message in the
GetInfo state.  This indicates a bug in the peering state machine.  Is
there a way that you could get us more complete logs?  I need an idea
of what happened to cause the erroneous log message.
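To illustrate the failure mode, here is a minimal stdlib-only sketch (an analogue of the osd/PG.cc statechart, not the actual Ceph code): GetInfo only reacts to the events it expects, and an event it does not handle bubbles out to the enclosing Started state, whose fallback constructs Crashed, whose constructor fails the "we got a bad state machine event" assert.

```cpp
#include <stdexcept>
#include <string>

// Illustrative event set; the real machine uses typed boost::statechart
// events (MLogRec for an incoming MOSDPGLog, MNotifyRec, etc.).
enum class Event { Info, Log };

struct Machine {
    std::string state = "GetInfo";

    void process_event(Event e) {
        if (state == "GetInfo" && e == Event::Info) {
            // Expected event: keep collecting peer infos, stay in GetInfo.
            return;
        }
        // Unhandled event: the reaction bubbles out to Started, whose
        // catch-all transits to Crashed.
        enter_crashed();
    }

    void enter_crashed() {
        state = "Crashed";
        // Stand-in for: assert(0 == "we got a bad state machine event").
        throw std::logic_error("we got a bad state machine event");
    }
};
```

In this sketch, an MOSDPGLog arriving while the PG is still in GetInfo corresponds to `process_event(Event::Log)`, which is exactly the unexpected-event path.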

Thanks!
-Sam

On 07/11/2011 07:26 AM, huang jun wrote:
> Hi all,
> We are running Ceph v0.30 with 31 OSDs on Linux 2.6.37. After we set up
> the cluster, many (10) OSDs went down because their cosd processes were
> killed; the OSD log is in the attached file "osd-failed".
>
> The same phenomenon occurred once a week ago. That time we fixed it by
> simply rebuilding the cluster, but this time we do not want to take that
> route; we want to find the root cause of the failure.
> Why does the SimpleMessenger keep sending RESETSESSION?
> What causes the boost::statechart recovery state machine to fail? Could
> you give us some advice?
>
> Thanks in advance



* Re: boost recoverystate handle_log fault
  2011-07-11 19:38 ` Samuel Just
@ 2011-08-04 13:35   ` huang jun
  0 siblings, 0 replies; 3+ messages in thread
From: huang jun @ 2011-08-04 13:35 UTC (permalink / raw)
  To: samuel.just; +Cc: ceph-devel

[-- Attachment #1: Type: text/plain, Size: 2322 bytes --]

Hi Sam,
We have hit this problem many times recently.
We are using Ceph v0.30 on Linux 2.6.37.
We built a cluster with 20 OSDs, but did not start all of them: at first
we started only 10 OSDs, then used 10 kernel clients to write 20 GB of
data in total.
We then started the other 10 OSDs with "/etc/init.d/ceph start" (almost
at the same time), and unusual things happened: a few OSDs went down,
and "pg dump" showed many crashed PGs. See the attached OSD debug log,
"osd.17.log".

We have some questions:
1) Did your test team test recovery performance while concurrently
adding 10 or more OSDs? If so, did everything work well?
2) Was the "Crashed" state entered from the "Initial" state or the
"Started" state? We think it was Initial, since we can see the PG
exiting the Started state.
3) Which events drive the state machine into Crashed? And what does
"inner_constructor<boost::mpl::l_item<mpl_::long_<1l>,
PG::RecoveryState::Crashed, boost::mpl::l_end>" mean here? We could not
find any documentation about it.

Thanks in advance!
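Regarding the inner_constructor/l_item machinery in that stack frame: it is how boost::statechart builds, at compile time, the chain of states it must construct when entering the target of a transition. Roughly, as an illustrative stdlib-only sketch (not Boost itself), mpl_::long_<1l> is the list length and the list here names just one state to construct, Crashed:

```cpp
#include <string>
#include <vector>

// Records which state constructors have run, standing in for the
// statechart context that the real inner_constructor threads through.
struct Context { std::vector<std::string> constructed; };

// Stand-in for PG::RecoveryState::Crashed; its constructor is where
// the assertion fires in the real code.
struct Crashed {
    explicit Crashed(Context& ctx) { ctx.constructed.push_back("Crashed"); }
};

// l_end terminates the compile-time list: nothing left to construct.
struct l_end {
    static void construct(Context&) {}
};

// l_item<Head, Tail>: construct Head, then recurse into Tail, mirroring
// boost::statechart::detail::inner_constructor over boost::mpl::l_item.
template <typename Head, typename Tail>
struct l_item {
    static void construct(Context& ctx) {
        Head head(ctx);       // run this state's constructor
        (void)head;
        Tail::construct(ctx); // then any inner states in the list
    }
};
```

So the frame you quoted simply says: "to perform this transition, construct the one-element state list {Crashed}" — and Crashed's constructor is what trips the assert.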

2011/7/12 Samuel Just <samuelj@hq.newdream.net>:
> The messenger errors probably indicate that that OSD's peers are down.
> The boost errors are a result of the OSD receiving a log message in the
> GetInfo state.  This indicates a bug in the peering state machine.  Is
> there a way that you could get us more complete logs?  I need an idea
> of what happened to cause the erroneous log message.
>
> Thanks!
> -Sam
>
> On 07/11/2011 07:26 AM, huang jun wrote:
>>
>> Hi all,
>> We are running Ceph v0.30 with 31 OSDs on Linux 2.6.37. After we set up
>> the cluster, many (10) OSDs went down because their cosd processes were
>> killed; the OSD log is in the attached file "osd-failed".
>>
>> The same phenomenon occurred once a week ago. That time we fixed it by
>> simply rebuilding the cluster, but this time we do not want to take that
>> route; we want to find the root cause of the failure.
>> Why does the SimpleMessenger keep sending RESETSESSION?
>> What causes the boost::statechart recovery state machine to fail? Could
>> you give us some advice?
>>
>> Thanks in advance
>
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>

[-- Attachment #2: osd.17.log --]
[-- Type: application/octet-stream, Size: 70433 bytes --]

2011-08-03 21:36:25.518720 499d9950 osd17 51 trim_map_bl_cache up to 52
2011-08-03 21:36:25.518733 499d9950 osd17 51 unlocking map_in_progress
2011-08-03 21:36:25.518753 4c9df950 osd17 51 _recover_now defer until 2011-08-03 21:36:35.791983
2011-08-03 21:36:25.531823 4a9db950 -- 192.168.0.118:6804/4783 <== osd7 192.168.0.108:6802/4694 14 ==== osd_ping(e51 as_of 51 heartbeat) v1 ==== 61+0+0 (1541629889 0 0) 0x2efe000 con 0x26d4280
2011-08-03 21:36:25.531847 4a9db950 osd17 51 heartbeat_dispatch 0x2efe000
2011-08-03 21:36:25.531878 4a9db950 osd17 51 handle_osd_ping osd7 192.168.0.108:6802/4694 took stat stat(2011-08-03 21:36:25.496263 oprate=0 qlen=0 recent_qlen=0 rdlat=0 / 0 fshedin=0)
2011-08-03 21:36:25.531903 4a9db950 osd17 51 take_peer_stat peer osd7 stat(2011-08-03 21:36:25.496263 oprate=0 qlen=0 recent_qlen=0 rdlat=0 / 0 fshedin=0)
2011-08-03 21:36:25.531910 4a9db950 osd17 51 note_peer_epoch osd7 has 51
2011-08-03 21:36:25.531917 4a9db950 osd17 51 _share_map_outgoing osd7 192.168.0.108:6801/4694 already has epoch 51
2011-08-03 21:36:25.546196 4a9db950 -- 192.168.0.118:6804/4783 <== osd5 192.168.0.106:6802/4590 15 ==== osd_ping(e49 as_of 49 heartbeat) v1 ==== 61+0+0 (3703573556 0 0) 0x268ce00 con 0x2eb98c0
2011-08-03 21:36:25.546219 4a9db950 osd17 51 heartbeat_dispatch 0x268ce00
2011-08-03 21:36:25.546249 4a9db950 osd17 51 handle_osd_ping osd5 192.168.0.106:6802/4590 took stat stat(2011-08-03 21:36:25.517467 oprate=0 qlen=0 recent_qlen=0 rdlat=0 / 0 fshedin=0)
2011-08-03 21:36:25.546265 4a9db950 osd17 51 take_peer_stat peer osd5 stat(2011-08-03 21:36:25.517467 oprate=0 qlen=0 recent_qlen=0 rdlat=0 / 0 fshedin=0)
2011-08-03 21:36:25.546273 4a9db950 osd17 51 note_peer_epoch osd5 has 51 >= 49
2011-08-03 21:36:25.546278 4a9db950 osd17 51 _share_map_outgoing osd5 192.168.0.106:6801/4590 already has epoch 51
2011-08-03 21:36:25.572975 4a9db950 -- 192.168.0.118:6804/4783 <== osd6 192.168.0.107:6802/4620 15 ==== osd_ping(e49 as_of 49 heartbeat) v1 ==== 61+0+0 (1836095788 0 0) 0x268c380 con 0x2a0c280
2011-08-03 21:36:25.572998 4a9db950 osd17 51 heartbeat_dispatch 0x268c380
2011-08-03 21:36:25.573029 4a9db950 osd17 51 handle_osd_ping osd6 192.168.0.107:6802/4620 took stat stat(2011-08-03 21:36:25.522271 oprate=0 qlen=0 recent_qlen=0 rdlat=0 / 0 fshedin=0)
2011-08-03 21:36:25.573045 4a9db950 osd17 51 take_peer_stat peer osd6 stat(2011-08-03 21:36:25.522271 oprate=0 qlen=0 recent_qlen=0 rdlat=0 / 0 fshedin=0)
2011-08-03 21:36:25.573052 4a9db950 osd17 51 note_peer_epoch osd6 has 51 >= 49
2011-08-03 21:36:25.573058 4a9db950 osd17 51 _share_map_outgoing osd6 192.168.0.107:6801/4620 already has epoch 51
2011-08-03 21:36:25.618875 4a9db950 -- 192.168.0.118:6804/4783 <== osd14 192.168.0.115:6804/4741 4 ==== osd_ping(e51 as_of 51 heartbeat) v1 ==== 61+0+0 (3137050305 0 0) 0x2d00540 con 0x2e49280
2011-08-03 21:36:25.618898 4a9db950 osd17 51 heartbeat_dispatch 0x2d00540
2011-08-03 21:36:25.618929 4a9db950 osd17 51 handle_osd_ping osd14 192.168.0.115:6804/4741 took stat stat(2011-08-03 21:36:25.572822 oprate=7.35231 qlen=0 recent_qlen=0 rdlat=0 / 0 fshedin=0)
2011-08-03 21:36:25.618946 4a9db950 osd17 51 take_peer_stat peer osd14 stat(2011-08-03 21:36:25.572822 oprate=7.35231 qlen=0 recent_qlen=0 rdlat=0 / 0 fshedin=0)
2011-08-03 21:36:25.618953 4a9db950 osd17 51 note_peer_epoch osd14 has 51 >= 51
2011-08-03 21:36:25.618959 4a9db950 osd17 51 _share_map_outgoing osd14 192.168.0.115:6803/4741 already has epoch 51
2011-08-03 21:36:25.708533 4a9db950 -- 192.168.0.118:6804/4783 <== osd1 192.168.0.102:6802/4393 15 ==== osd_ping(e50 as_of 50 heartbeat) v1 ==== 61+0+0 (1942694636 0 0) 0x304ba80 con 0x26f1140
2011-08-03 21:36:25.708557 4a9db950 osd17 51 heartbeat_dispatch 0x304ba80
2011-08-03 21:36:25.708589 4a9db950 osd17 51 handle_osd_ping osd1 192.168.0.102:6802/4393 took stat stat(2011-08-03 21:36:25.681013 oprate=0 qlen=0 recent_qlen=0 rdlat=0 / 0 fshedin=0)
2011-08-03 21:36:25.708605 4a9db950 osd17 51 take_peer_stat peer osd1 stat(2011-08-03 21:36:25.681013 oprate=0 qlen=0 recent_qlen=0 rdlat=0 / 0 fshedin=0)
2011-08-03 21:36:25.708612 4a9db950 osd17 51 note_peer_epoch osd1 has 51 >= 50
2011-08-03 21:36:25.708618 4a9db950 osd17 51 _share_map_outgoing osd1 192.168.0.102:6801/4393 already has epoch 51
2011-08-03 21:36:25.793374 4a9db950 -- 192.168.0.118:6804/4783 <== osd16 192.168.0.117:6804/4634 14 ==== osd_ping(e51 as_of 51 heartbeat) v1 ==== 61+0+0 (3213721843 0 0) 0x2d00e00 con 0x26c9500
2011-08-03 21:36:25.793398 4a9db950 osd17 51 heartbeat_dispatch 0x2d00e00
2011-08-03 21:36:25.793428 4a9db950 osd17 51 handle_osd_ping osd16 192.168.0.117:6804/4634 took stat stat(2011-08-03 21:36:25.755083 oprate=0 qlen=0 recent_qlen=0 rdlat=0 / 0 fshedin=0)
2011-08-03 21:36:25.793448 4a9db950 osd17 51 take_peer_stat peer osd16 stat(2011-08-03 21:36:25.755083 oprate=0 qlen=0 recent_qlen=0 rdlat=0 / 0 fshedin=0)
2011-08-03 21:36:25.793453 4a9db950 osd17 51 note_peer_epoch osd16 has 51 >= 51
2011-08-03 21:36:25.793459 4a9db950 osd17 51 _share_map_outgoing osd16 192.168.0.117:6803/4634 already has epoch 51
2011-08-03 21:36:25.874918 4a9db950 -- 192.168.0.118:6804/4783 <== osd2 192.168.0.103:6802/4498 14 ==== osd_ping(e50 as_of 50 heartbeat) v1 ==== 61+0+0 (3129756903 0 0) 0x304b540 con 0x2eb9280
2011-08-03 21:36:25.874941 4a9db950 osd17 51 heartbeat_dispatch 0x304b540
2011-08-03 21:36:25.874972 4a9db950 osd17 51 handle_osd_ping osd2 192.168.0.103:6802/4498 took stat stat(2011-08-03 21:36:25.833873 oprate=0 qlen=0 recent_qlen=0 rdlat=0 / 0 fshedin=0)
2011-08-03 21:36:25.874987 4a9db950 osd17 51 take_peer_stat peer osd2 stat(2011-08-03 21:36:25.833873 oprate=0 qlen=0 recent_qlen=0 rdlat=0 / 0 fshedin=0)
2011-08-03 21:36:25.874994 4a9db950 osd17 51 note_peer_epoch osd2 has 51 >= 50
2011-08-03 21:36:25.875000 4a9db950 osd17 51 _share_map_outgoing osd2 192.168.0.103:6801/4498 already has epoch 51
2011-08-03 21:36:25.973901 449cf950 osd17 51 tick
2011-08-03 21:36:25.973968 449cf950 osd17 51 scrub_should_schedule loadavg 0.07 < max 0.5 = no, randomly backing off
2011-08-03 21:36:25.974012 4a1da950 osd17 51 _dispatch 0x29d5c40 PGnot v1
2011-08-03 21:36:25.974031 4a1da950 osd17 51 handle_pg_notify from osd7
2011-08-03 21:36:25.974039 4a1da950 osd17 51 require_same_or_newer_map 45 (i am 51) 0x29d5c40
2011-08-03 21:36:25.974056 4c9df950 osd17 51 _recover_now defer until 2011-08-03 21:36:35.791983
2011-08-03 21:36:25.974131 4a1da950 osd17 51 pg[0.2( v 10'2 lc 0'0 (0'0,10'2] n=2 ec=2 les/c 26/11 39/39/39) [17,7] r=0 mlcod 0'0 !hml active m=2] handle_notify 0.2( v 10'2 (0'0,10'2] n=2 ec=2 les/c 26/11 39/39/39) from osd7
2011-08-03 21:36:25.974175 4a1da950 osd17 51 pg[0.2( v 10'2 lc 0'0 (0'0,10'2] n=2 ec=2 les/c 26/11 39/39/39) [17,7] r=0 mlcod 0'0 !hml active m=2] state<Started/Primary/Active>: Active: got notify from 7, already have info from that osd, ignoring
2011-08-03 21:36:25.974194 4a1da950 filestore(/data/osd17) queue_transactions existing osr 0x2a7d0f0/0x2aa5b68
2011-08-03 21:36:25.974209 4a1da950 filestore(/data/osd17) queue_transactions (trailing journal) 8172 0x2cd1e60
2011-08-03 21:36:25.974223 4a1da950 filestore(/data/osd17) _do_transaction on 0x2cd1e60
2011-08-03 21:36:25.974251 4a1da950 osd17 51 pg[2.0( empty n=0 ec=2 les/c 26/51 39/39/39) [17,7] r=0 mlcod 0'0 !hml active+clean] handle_notify 2.0( empty n=0 ec=2 les/c 26/35 39/39/39) from osd7
2011-08-03 21:36:25.974266 4a1da950 osd17 51 pg[2.0( empty n=0 ec=2 les/c 26/51 39/39/39) [17,7] r=0 mlcod 0'0 !hml active+clean] state<Started/Primary/Active>: Active: got notify from 7, already have info from that osd, ignoring
2011-08-03 21:36:25.974272 4a1da950 filestore(/data/osd17) queue_transactions existing osr 0x2a7d2d0/0x2aa0b68
2011-08-03 21:36:25.974276 4a1da950 filestore(/data/osd17) queue_transactions (trailing journal) 8173 0x2cd11e0
2011-08-03 21:36:25.974280 4a1da950 filestore(/data/osd17) _do_transaction on 0x2cd11e0
2011-08-03 21:36:25.974293 4a1da950 osd17 51 pg[1.1p17( empty n=0 ec=2 les/c 26/51 39/39/39) [17,7] r=0 mlcod 0'0 !hml active+clean] handle_notify 1.1p17( empty n=0 ec=2 les/c 26/35 39/39/39) from osd7
2011-08-03 21:36:25.974303 4a1da950 osd17 51 pg[1.1p17( empty n=0 ec=2 les/c 26/51 39/39/39) [17,7] r=0 mlcod 0'0 !hml active+clean] state<Started/Primary/Active>: Active: got notify from 7, already have info from that osd, ignoring
2011-08-03 21:36:25.974309 4a1da950 filestore(/data/osd17) queue_transactions existing osr 0x2ab4e10/0x2aacb68
2011-08-03 21:36:25.974313 4a1da950 filestore(/data/osd17) queue_transactions (trailing journal) 8174 0x2cd1280
2011-08-03 21:36:25.974317 4a1da950 filestore(/data/osd17) _do_transaction on 0x2cd1280
2011-08-03 21:36:25.974330 4a1da950 osd17 51 pg[1.1( empty n=0 ec=2 les/c 26/51 39/39/39) [17,7] r=0 mlcod 0'0 !hml active+clean] handle_notify 1.1( empty n=0 ec=2 les/c 26/35 39/39/39) from osd7
2011-08-03 21:36:25.974349 4a1da950 osd17 51 pg[1.1( empty n=0 ec=2 les/c 26/51 39/39/39) [17,7] r=0 mlcod 0'0 !hml active+clean] state<Started/Primary/Active>: Active: got notify from 7, already have info from that osd, ignoring
2011-08-03 21:36:25.974355 4a1da950 filestore(/data/osd17) queue_transactions existing osr 0x2ab4c30/0x2ab0b68
2011-08-03 21:36:25.974360 4a1da950 filestore(/data/osd17) queue_transactions (trailing journal) 8175 0x2cd1780
2011-08-03 21:36:25.974364 4a1da950 filestore(/data/osd17) _do_transaction on 0x2cd1780
2011-08-03 21:36:25.974376 4a1da950 osd17 51 pg[2.0p17( empty n=0 ec=2 les/c 10/35 39/39/39) [17,7] r=0 mlcod 0'0 !hml peering] handle_notify 2.0p17( empty n=0 ec=2 les/c 10/35 39/39/39) from osd7
2011-08-03 21:36:25.974389 4a1da950 osd17 51 pg[2.0p17( empty n=0 ec=2 les/c 10/35 39/39/39) [17,7] r=0 mlcod 0'0 !hml peering] state<Started/Primary>: handle_pg_notify from osd7
2011-08-03 21:36:25.974402 4a1da950 osd17 51 pg[2.0p17( empty n=0 ec=2 les/c 10/35 39/39/39) [17,7] r=0 mlcod 0'0 !hml peering] state<Started/Primary>: pg[2.0p17( empty n=0 ec=2 les/c 10/35 39/39/39) [17,7] r=0 mlcod 0'0 !hml peering] got dup osd7 info 2.0p17( empty n=0 ec=2 les/c 10/35 39/39/39), identical to ours
2011-08-03 21:36:25.974408 4a1da950 filestore(/data/osd17) queue_transactions existing osr 0x2ab4a50/0x2ab2b68
2011-08-03 21:36:25.974412 4a1da950 filestore(/data/osd17) queue_transactions (trailing journal) 8176 0x2ca91e0
2011-08-03 21:36:25.974416 4a1da950 filestore(/data/osd17) _do_transaction on 0x2ca91e0
2011-08-03 21:36:25.974429 4a1da950 osd17 51 pg[1.2dc( empty n=0 ec=2 les/c 26/51 39/39/39) [17,7] r=0 mlcod 0'0 !hml active+clean] handle_notify 1.2dc( empty n=0 ec=2 les/c 26/35 39/39/39) from osd7
2011-08-03 21:36:25.974440 4a1da950 osd17 51 pg[1.2dc( empty n=0 ec=2 les/c 26/51 39/39/39) [17,7] r=0 mlcod 0'0 !hml active+clean] state<Started/Primary/Active>: Active: got notify from 7, already have info from that osd, ignoring
2011-08-03 21:36:25.974446 4a1da950 filestore(/data/osd17) queue_transactions existing osr 0x2ab4870/0x2ab3b68
2011-08-03 21:36:25.974450 4a1da950 filestore(/data/osd17) queue_transactions (trailing journal) 8177 0x2ca93c0
2011-08-03 21:36:25.974454 4a1da950 filestore(/data/osd17) _do_transaction on 0x2ca93c0
2011-08-03 21:36:25.974467 4a1da950 osd17 51 pg[0.2dd( v 10'5 lc 0'0 (10'3,10'5]+backlog n=5 ec=2 les/c 10/11 39/39/39) [17,7] r=0 mlcod 0'0 !hml active m=5] handle_notify 0.2dd( v 10'5 (10'3,10'5]+backlog n=5 ec=2 les/c 10/11 39/39/39) from osd7
2011-08-03 21:36:25.974479 4a1da950 osd17 51 pg[0.2dd( v 10'5 lc 0'0 (10'3,10'5]+backlog n=5 ec=2 les/c 10/11 39/39/39) [17,7] r=0 mlcod 0'0 !hml active m=5] state<Started/Primary/Active>: Active: got notify from 7, already have info from that osd, ignoring
2011-08-03 21:36:25.974484 4a1da950 filestore(/data/osd17) queue_transactions existing osr 0x2ab4690/0x2abbb68
2011-08-03 21:36:25.974489 4a1da950 filestore(/data/osd17) queue_transactions (trailing journal) 8178 0x2ca9320
2011-08-03 21:36:25.974492 4a1da950 filestore(/data/osd17) _do_transaction on 0x2ca9320
2011-08-03 21:36:25.974807 4a1da950 osd17 51 pg[2.2db( empty n=0 ec=2 les/c 28/51 39/39/39) [17,7] r=0 mlcod 0'0 !hml active+clean] handle_notify 2.2db( empty n=0 ec=2 les/c 28/35 39/39/39) from osd7
2011-08-03 21:36:25.974823 4a1da950 osd17 51 pg[2.2db( empty n=0 ec=2 les/c 28/51 39/39/39) [17,7] r=0 mlcod 0'0 !hml active+clean] state<Started/Primary/Active>: Active: got notify from 7, already have info from that osd, ignoring
2011-08-03 21:36:25.974829 4a1da950 filestore(/data/osd17) queue_transactions existing osr 0x2ab44b0/0x2a8cb68
2011-08-03 21:36:25.974834 4a1da950 filestore(/data/osd17) queue_transactions (trailing journal) 8179 0x302a000
2011-08-03 21:36:25.974838 4a1da950 filestore(/data/osd17) _do_transaction on 0x302a000
2011-08-03 21:36:25.974854 4a1da950 osd17 51 pg[0.28c( v 10'5 lc 0'0 (10'3,10'5]+backlog n=5 ec=2 les/c 10/11 39/39/39) [17,7] r=0 mlcod 0'0 !hml active m=5] handle_notify 0.28c( v 10'5 (10'3,10'5]+backlog n=5 ec=2 les/c 10/11 39/39/39) from osd7
2011-08-03 21:36:25.974872 4a1da950 osd17 51 pg[0.28c( v 10'5 lc 0'0 (10'3,10'5]+backlog n=5 ec=2 les/c 10/11 39/39/39) [17,7] r=0 mlcod 0'0 !hml active m=5] state<Started/Primary/Active>: Active: got notify from 7, already have info from that osd, ignoring
2011-08-03 21:36:25.974878 4a1da950 filestore(/data/osd17) queue_transactions existing osr 0x2ab43c0/0x2abdb68
2011-08-03 21:36:25.974883 4a1da950 filestore(/data/osd17) queue_transactions (trailing journal) 8180 0x2ca9f00
2011-08-03 21:36:25.974887 4a1da950 filestore(/data/osd17) _do_transaction on 0x2ca9f00
2011-08-03 21:36:25.974898 4a1da950 osd17 51 pg[1.28b( empty n=0 ec=2 les/c 26/51 39/39/39) [17,7] r=0 mlcod 0'0 !hml active+clean] handle_notify 1.28b( empty n=0 ec=2 les/c 26/35 39/39/39) from osd7
2011-08-03 21:36:25.974909 4a1da950 osd17 51 pg[1.28b( empty n=0 ec=2 les/c 26/51 39/39/39) [17,7] r=0 mlcod 0'0 !hml active+clean] state<Started/Primary/Active>: Active: got notify from 7, already have info from that osd, ignoring
2011-08-03 21:36:25.974914 4a1da950 filestore(/data/osd17) queue_transactions existing osr 0x2ab41e0/0x2ac1b68
2011-08-03 21:36:25.974919 4a1da950 filestore(/data/osd17) queue_transactions (trailing journal) 8181 0x2ca9500
2011-08-03 21:36:25.974923 4a1da950 filestore(/data/osd17) _do_transaction on 0x2ca9500
2011-08-03 21:36:25.974935 4a1da950 osd17 51 pg[2.28a( empty n=0 ec=2 les/c 28/51 39/39/39) [17,7] r=0 mlcod 0'0 !hml active+clean] handle_notify 2.28a( empty n=0 ec=2 les/c 28/35 39/39/39) from osd7
2011-08-03 21:36:25.974946 4a1da950 osd17 51 pg[2.28a( empty n=0 ec=2 les/c 28/51 39/39/39) [17,7] r=0 mlcod 0'0 !hml active+clean] state<Started/Primary/Active>: Active: got notify from 7, already have info from that osd, ignoring
2011-08-03 21:36:25.974952 4a1da950 filestore(/data/osd17) queue_transactions existing osr 0x2ac8f00/0x2ac2b68
2011-08-03 21:36:25.974956 4a1da950 filestore(/data/osd17) queue_transactions (trailing journal) 8182 0x2ca9e60
2011-08-03 21:36:25.974960 4a1da950 filestore(/data/osd17) _do_transaction on 0x2ca9e60
2011-08-03 21:36:25.974972 4a1da950 osd17 51 pg[2.281( empty n=0 ec=2 les/c 27/51 39/39/39) [17,7] r=0 mlcod 0'0 !hml active+clean] handle_notify 2.281( empty n=0 ec=2 les/c 27/35 39/39/39) from osd7
2011-08-03 21:36:25.974982 4a1da950 osd17 51 pg[2.281( empty n=0 ec=2 les/c 27/51 39/39/39) [17,7] r=0 mlcod 0'0 !hml active+clean] state<Started/Primary/Active>: Active: got notify from 7, already have info from that osd, ignoring
2011-08-03 21:36:25.974988 4a1da950 filestore(/data/osd17) queue_transactions existing osr 0x2bf51e0/0x2c13b68
2011-08-03 21:36:25.974992 4a1da950 filestore(/data/osd17) queue_transactions (trailing journal) 8183 0x2ca9780
2011-08-03 21:36:25.974996 4a1da950 filestore(/data/osd17) _do_transaction on 0x2ca9780
2011-08-03 21:36:25.975008 4a1da950 osd17 51 pg[1.282( empty n=0 ec=2 les/c 26/51 39/39/39) [17,7] r=0 mlcod 0'0 !hml active+clean] handle_notify 1.282( empty n=0 ec=2 les/c 26/35 39/39/39) from osd7
2011-08-03 21:36:25.975019 4a1da950 osd17 51 pg[1.282( empty n=0 ec=2 les/c 26/51 39/39/39) [17,7] r=0 mlcod 0'0 !hml active+clean] state<Started/Primary/Active>: Active: got notify from 7, already have info from that osd, ignoring
2011-08-03 21:36:25.975025 4a1da950 filestore(/data/osd17) queue_transactions existing osr 0x2bf53c0/0x2c0fb68
2011-08-03 21:36:25.975029 4a1da950 filestore(/data/osd17) queue_transactions (trailing journal) 8184 0x2ca9c80
2011-08-03 21:36:25.975033 4a1da950 filestore(/data/osd17) _do_transaction on 0x2ca9c80
2011-08-03 21:36:25.975048 4a1da950 osd17 51 pg[0.283( v 10'3 lc 0'0 (10'1,10'3]+backlog n=3 ec=2 les/c 10/11 39/39/39) [17,7] r=0 mlcod 0'0 !hml active m=3] handle_notify 0.283( v 10'3 (10'1,10'3]+backlog n=3 ec=2 les/c 10/11 39/39/39) from osd7
2011-08-03 21:36:25.975059 4a1da950 osd17 51 pg[0.283( v 10'3 lc 0'0 (10'1,10'3]+backlog n=3 ec=2 les/c 10/11 39/39/39) [17,7] r=0 mlcod 0'0 !hml active m=3] state<Started/Primary/Active>: Active: got notify from 7, already have info from that osd, ignoring
2011-08-03 21:36:25.975065 4a1da950 filestore(/data/osd17) queue_transactions existing osr 0x2bf55a0/0x2c0eb68
2011-08-03 21:36:25.975075 4a1da950 filestore(/data/osd17) queue_transactions (trailing journal) 8185 0x2ca9dc0
2011-08-03 21:36:25.975079 4a1da950 filestore(/data/osd17) _do_transaction on 0x2ca9dc0
2011-08-03 21:36:25.975095 4a1da950 osd17 51 pg[0.22b( v 10'1 lc 0'0 (0'0,10'1] n=1 ec=2 les/c 28/11 39/39/39) [17,5] r=0 mlcod 0'0 !hml active m=1] handle_notify 0.22b( v 10'1 (0'0,10'1] n=1 ec=2 les/c 10/11 39/39/39) from osd7
2011-08-03 21:36:25.975107 4a1da950 osd17 51 pg[0.22b( v 10'1 lc 0'0 (0'0,10'1] n=1 ec=2 les/c 28/11 39/39/39) [17,5] r=0 mlcod 0'0 !hml active m=1] state<Started/Primary/Active>: Active: got notify from 7, already have info from that osd, ignoring
2011-08-03 21:36:25.975113 4a1da950 filestore(/data/osd17) queue_transactions existing osr 0x2ac83c0/0x2ad1b68
2011-08-03 21:36:25.975117 4a1da950 filestore(/data/osd17) queue_transactions (trailing journal) 8186 0x2ca9b40
2011-08-03 21:36:25.975121 4a1da950 filestore(/data/osd17) _do_transaction on 0x2ca9b40
2011-08-03 21:36:25.975132 4a1da950 osd17 51 pg[2.47d( empty n=0 ec=2 les/c 10/51 39/39/39) [17,16] r=0 mlcod 0'0 !hml active+clean] handle_notify 2.47d( empty n=0 ec=2 les/c 10/35 39/39/39) from osd7
2011-08-03 21:36:25.975143 4a1da950 osd17 51 pg[2.47d( empty n=0 ec=2 les/c 10/51 39/39/39) [17,16] r=0 mlcod 0'0 !hml active+clean] state<Started/Primary/Active>: Active: got notify from 7, calling proc_replica_info and discover_all_missing
2011-08-03 21:36:25.975153 4a1da950 osd17 51 pg[2.47d( empty n=0 ec=2 les/c 10/51 39/39/39) [17,16] r=0 mlcod 0'0 !hml active+clean]  got osd7 2.47d( empty n=0 ec=2 les/c 10/35 39/39/39)
2011-08-03 21:36:25.975169 4a1da950 osd17 51 pg[2.47d( empty n=0 ec=2 les/c 10/51 39/39/39) [17,16] r=0 mlcod 0'0 !hml active+clean]  osd7 has stray content: 2.47d( empty n=0 ec=2 les/c 10/35 39/39/39)
2011-08-03 21:36:25.975180 4a1da950 osd17 51 pg[2.47d( empty n=0 ec=2 les/c 10/51 39/39/39) [17,16] r=0 mlcod 0'0 !hml active+clean] purge_strays 7
2011-08-03 21:36:25.975189 4a1da950 osd17 51 pg[2.47d( empty n=0 ec=2 les/c 10/51 39/39/39) [17,16] r=0 mlcod 0'0 !hml active+clean] sending PGRemove to osd7
2011-08-03 21:36:25.975207 4a1da950 osd17 51 pg[2.47d( empty n=0 ec=2 les/c 10/51 39/39/39) [17,16] r=0 mlcod 0'0 !hml active+clean] update_stats 39'60
2011-08-03 21:36:25.975214 4a1da950 filestore(/data/osd17) queue_transactions existing osr 0x2a2e780/0x26a5b68
2011-08-03 21:36:25.975219 4a1da950 filestore(/data/osd17) queue_transactions (trailing journal) 8187 0x2ca9aa0
2011-08-03 21:36:25.975223 4a1da950 filestore(/data/osd17) _do_transaction on 0x2ca9aa0
2011-08-03 21:36:25.975236 4a1da950 osd17 51 pg[1.47e( empty n=0 ec=2 les/c 10/51 39/39/39) [17,16] r=0 mlcod 0'0 !hml active+clean] handle_notify 1.47e( empty n=0 ec=2 les/c 10/35 39/39/39) from osd7
2011-08-03 21:36:25.975246 4a1da950 osd17 51 pg[1.47e( empty n=0 ec=2 les/c 10/51 39/39/39) [17,16] r=0 mlcod 0'0 !hml active+clean] state<Started/Primary/Active>: Active: got notify from 7, calling proc_replica_info and discover_all_missing
2011-08-03 21:36:25.975256 4a1da950 osd17 51 pg[1.47e( empty n=0 ec=2 les/c 10/51 39/39/39) [17,16] r=0 mlcod 0'0 !hml active+clean]  got osd7 1.47e( empty n=0 ec=2 les/c 10/35 39/39/39)
2011-08-03 21:36:25.975269 4a1da950 osd17 51 pg[1.47e( empty n=0 ec=2 les/c 10/51 39/39/39) [17,16] r=0 mlcod 0'0 !hml active+clean]  osd7 has stray content: 1.47e( empty n=0 ec=2 les/c 10/35 39/39/39)
2011-08-03 21:36:25.975279 4a1da950 osd17 51 pg[1.47e( empty n=0 ec=2 les/c 10/51 39/39/39) [17,16] r=0 mlcod 0'0 !hml active+clean] purge_strays 7
2011-08-03 21:36:25.975288 4a1da950 osd17 51 pg[1.47e( empty n=0 ec=2 les/c 10/51 39/39/39) [17,16] r=0 mlcod 0'0 !hml active+clean] sending PGRemove to osd7
2011-08-03 21:36:25.975298 4a1da950 osd17 51 pg[1.47e( empty n=0 ec=2 les/c 10/51 39/39/39) [17,16] r=0 mlcod 0'0 !hml active+clean] update_stats 39'60
2011-08-03 21:36:25.975304 4a1da950 filestore(/data/osd17) queue_transactions existing osr 0x2bb85a0/0x26a6b68
2011-08-03 21:36:25.975308 4a1da950 filestore(/data/osd17) queue_transactions (trailing journal) 8188 0x2ca9280
2011-08-03 21:36:25.975318 4a1da950 filestore(/data/osd17) _do_transaction on 0x2ca9280
2011-08-03 21:36:25.975334 4a1da950 osd17 51 pg[0.47f( v 10'1 lc 0'0 (0'0,10'1] n=1 ec=2 les/c 10/10 39/39/39) [17,16] r=0 mlcod 0'0 !hml active m=1] handle_notify 0.47f( v 10'1 (0'0,10'1] n=1 ec=2 les/c 10/10 39/39/39) from osd7
2011-08-03 21:36:25.975346 4a1da950 osd17 51 pg[0.47f( v 10'1 lc 0'0 (0'0,10'1] n=1 ec=2 les/c 10/10 39/39/39) [17,16] r=0 mlcod 0'0 !hml active m=1] state<Started/Primary/Active>: Active: got notify from 7, already have info from that osd, ignoring
2011-08-03 21:36:25.975352 4a1da950 filestore(/data/osd17) queue_transactions existing osr 0x2c80b40/0x26a7b68
2011-08-03 21:36:25.975356 4a1da950 filestore(/data/osd17) queue_transactions (trailing journal) 8189 0x2ca9be0
2011-08-03 21:36:25.975360 4a1da950 filestore(/data/osd17) _do_transaction on 0x2ca9be0
2011-08-03 21:36:25.975372 4a1da950 osd17 51 pg[0.11a( empty n=0 ec=2 les/c 10/11 49/49/39) [17,14] r=0 mlcod 0'0 !hml peering] get_or_create_pg acting changed in 49 (msg from 45)
2011-08-03 21:36:25.975383 4a1da950 osd17 51 pg[2.118( empty n=0 ec=2 les/c 10/11 49/49/39) [17,14] r=0 mlcod 0'0 !hml peering] get_or_create_pg acting changed in 49 (msg from 45)
2011-08-03 21:36:25.975392 4a1da950 osd17 51 pg[1.119( empty n=0 ec=2 les/c 10/11 49/49/39) [17,14] r=0 mlcod 0'0 !hml peering] get_or_create_pg acting changed in 49 (msg from 45)
2011-08-03 21:36:25.975405 4a1da950 osd17 51 pg[0.3fb( v 10'4 lc 0'0 (10'2,10'4]+backlog n=4 ec=3 les/c 7/11 39/39/39) [17,16] r=0 mlcod 0'0 !hml active m=4] handle_notify 0.3fb( v 10'4 (10'2,10'4]+backlog n=4 ec=3 les/c 7/11 39/39/39) from osd7
2011-08-03 21:36:25.975417 4a1da950 osd17 51 pg[0.3fb( v 10'4 lc 0'0 (10'2,10'4]+backlog n=4 ec=3 les/c 7/11 39/39/39) [17,16] r=0 mlcod 0'0 !hml active m=4] state<Started/Primary/Active>: Active: got notify from 7, already have info from that osd, ignoring
2011-08-03 21:36:25.975423 4a1da950 filestore(/data/osd17) queue_transactions existing osr 0x2b7ca50/0x2b7bb68
2011-08-03 21:36:25.975427 4a1da950 filestore(/data/osd17) queue_transactions (trailing journal) 8190 0x2ca9640
2011-08-03 21:36:25.975431 4a1da950 filestore(/data/osd17) _do_transaction on 0x2ca9640
2011-08-03 21:36:25.975442 4a1da950 osd17 51 pg[1.eb( empty n=0 ec=2 les/c 38/46 49/49/39) [17,10] r=0 mlcod 0'0 !hml peering] get_or_create_pg acting changed in 49 (msg from 45)
2011-08-03 21:36:25.975452 4a1da950 osd17 51 pg[2.ea( empty n=0 ec=2 les/c 38/46 49/49/39) [17,10] r=0 mlcod 0'0 !hml peering] get_or_create_pg acting changed in 49 (msg from 45)
2011-08-03 21:36:25.975463 4a1da950 osd17 51 pg[1.3d0( v 10'1 lc 0'0 (0'0,10'1] n=1 ec=2 les/c 38/11 49/49/39) [17,10] r=0 mlcod 0'0 !hml peering m=1 u=1] get_or_create_pg acting changed in 49 (msg from 45)
2011-08-03 21:36:25.975472 4a1da950 osd17 51 pg[0.3d1( empty n=0 ec=2 les/c 10/11 49/49/39) [17,10] r=0 mlcod 0'0 !hml peering] get_or_create_pg acting changed in 49 (msg from 45)
2011-08-03 21:36:25.975483 4a1da950 osd17 51 pg[1.37c( v 10'1 lc 0'0 (0'0,10'1] n=1 ec=2 les/c 29/11 49/49/39) [17,14] r=0 mlcod 0'0 !hml peering m=1 u=1] get_or_create_pg acting changed in 49 (msg from 45)
2011-08-03 21:36:25.975493 4a1da950 osd17 51 pg[2.37b( empty n=0 ec=2 les/c 38/47 49/49/39) [17,14] r=0 mlcod 0'0 !hml peering] get_or_create_pg acting changed in 49 (msg from 45)
2011-08-03 21:36:25.975502 4a1da950 osd17 51 pg[0.354( empty n=0 ec=2 les/c 10/10 50/50/39) [17,11] r=0 mlcod 0'0 !hml peering] get_or_create_pg acting changed in 50 (msg from 45)
2011-08-03 21:36:25.975513 4a1da950 osd17 51 pg[1.50( empty n=0 ec=2 les/c 10/51 39/39/39) [17,2] r=0 mlcod 0'0 !hml active+clean] handle_notify 1.50( empty n=0 ec=2 les/c 10/35 39/39/39) from osd7
2011-08-03 21:36:25.975523 4a1da950 osd17 51 pg[1.50( empty n=0 ec=2 les/c 10/51 39/39/39) [17,2] r=0 mlcod 0'0 !hml active+clean] state<Started/Primary/Active>: Active: got notify from 7, calling proc_replica_info and discover_all_missing
2011-08-03 21:36:25.975533 4a1da950 osd17 51 pg[1.50( empty n=0 ec=2 les/c 10/51 39/39/39) [17,2] r=0 mlcod 0'0 !hml active+clean]  got osd7 1.50( empty n=0 ec=2 les/c 10/35 39/39/39)
2011-08-03 21:36:25.975552 4a1da950 osd17 51 pg[1.50( empty n=0 ec=2 les/c 10/51 39/39/39) [17,2] r=0 mlcod 0'0 !hml active+clean]  osd7 has stray content: 1.50( empty n=0 ec=2 les/c 10/35 39/39/39)
2011-08-03 21:36:25.975562 4a1da950 osd17 51 pg[1.50( empty n=0 ec=2 les/c 10/51 39/39/39) [17,2] r=0 mlcod 0'0 !hml active+clean] purge_strays 7
2011-08-03 21:36:25.975571 4a1da950 osd17 51 pg[1.50( empty n=0 ec=2 les/c 10/51 39/39/39) [17,2] r=0 mlcod 0'0 !hml active+clean] sending PGRemove to osd7
2011-08-03 21:36:25.975581 4a1da950 osd17 51 pg[1.50( empty n=0 ec=2 les/c 10/51 39/39/39) [17,2] r=0 mlcod 0'0 !hml active+clean] update_stats 39'63
2011-08-03 21:36:25.975587 4a1da950 filestore(/data/osd17) queue_transactions existing osr 0x2b93f00/0x2bebb68
2011-08-03 21:36:25.975591 4a1da950 filestore(/data/osd17) queue_transactions (trailing journal) 8191 0x2ca96e0
2011-08-03 21:36:25.975595 4a1da950 filestore(/data/osd17) _do_transaction on 0x2ca96e0
2011-08-03 21:36:25.975611 4a1da950 osd17 51 pg[0.51( v 10'2 lc 0'0 (0'0,10'2] n=2 ec=2 les/c 10/10 39/39/39) [17,2] r=0 mlcod 0'0 !hml active m=2] handle_notify 0.51( v 10'2 (0'0,10'2] n=2 ec=2 les/c 10/10 39/39/39) from osd7
2011-08-03 21:36:25.975623 4a1da950 osd17 51 pg[0.51( v 10'2 lc 0'0 (0'0,10'2] n=2 ec=2 les/c 10/10 39/39/39) [17,2] r=0 mlcod 0'0 !hml active m=2] state<Started/Primary/Active>: Active: got notify from 7, already have info from that osd, ignoring
2011-08-03 21:36:25.975629 4a1da950 filestore(/data/osd17) queue_transactions existing osr 0x2bf5e10/0x2befb68
2011-08-03 21:36:25.975633 4a1da950 filestore(/data/osd17) queue_transactions (trailing journal) 8192 0x2ca9a00
2011-08-03 21:36:25.975637 4a1da950 filestore(/data/osd17) _do_transaction on 0x2ca9a00
2011-08-03 21:36:25.975647 4a1da950 osd17 51 pg[1.353( empty n=0 ec=2 les/c 48/49 50/50/39) [17,11] r=0 mlcod 0'0 !hml peering] get_or_create_pg acting changed in 50 (msg from 45)
2011-08-03 21:36:25.975656 4a1da950 osd17 51 pg[2.352( empty n=0 ec=2 les/c 48/49 50/50/39) [17,11] r=0 mlcod 0'0 !hml peering] get_or_create_pg acting changed in 50 (msg from 45)
2011-08-03 21:36:25.975667 4a1da950 osd17 51 pg[2.4f( empty n=0 ec=2 les/c 10/51 39/39/39) [17,2] r=0 mlcod 0'0 !hml active+clean] handle_notify 2.4f( empty n=0 ec=2 les/c 10/35 39/39/39) from osd7
2011-08-03 21:36:25.975678 4a1da950 osd17 51 pg[2.4f( empty n=0 ec=2 les/c 10/51 39/39/39) [17,2] r=0 mlcod 0'0 !hml active+clean] state<Started/Primary/Active>: Active: got notify from 7, calling proc_replica_info and discover_all_missing
2011-08-03 21:36:25.975688 4a1da950 osd17 51 pg[2.4f( empty n=0 ec=2 les/c 10/51 39/39/39) [17,2] r=0 mlcod 0'0 !hml active+clean]  got osd7 2.4f( empty n=0 ec=2 les/c 10/35 39/39/39)
2011-08-03 21:36:25.975700 4a1da950 osd17 51 pg[2.4f( empty n=0 ec=2 les/c 10/51 39/39/39) [17,2] r=0 mlcod 0'0 !hml active+clean]  osd7 has stray content: 2.4f( empty n=0 ec=2 les/c 10/35 39/39/39)
2011-08-03 21:36:25.975710 4a1da950 osd17 51 pg[2.4f( empty n=0 ec=2 les/c 10/51 39/39/39) [17,2] r=0 mlcod 0'0 !hml active+clean] purge_strays 7
2011-08-03 21:36:25.975719 4a1da950 osd17 51 pg[2.4f( empty n=0 ec=2 les/c 10/51 39/39/39) [17,2] r=0 mlcod 0'0 !hml active+clean] sending PGRemove to osd7
2011-08-03 21:36:25.975729 4a1da950 osd17 51 pg[2.4f( empty n=0 ec=2 les/c 10/51 39/39/39) [17,2] r=0 mlcod 0'0 !hml active+clean] update_stats 39'60
2011-08-03 21:36:25.975734 4a1da950 filestore(/data/osd17) queue_transactions existing osr 0x2bf5d20/0x2bf0b68
2011-08-03 21:36:25.975739 4a1da950 filestore(/data/osd17) queue_transactions (trailing journal) 8193 0x2ca9960
2011-08-03 21:36:25.975743 4a1da950 filestore(/data/osd17) _do_transaction on 0x2ca9960
2011-08-03 21:36:25.975754 4a1da950 osd17 51 pg[2.34c( empty n=0 ec=2 les/c 10/46 49/49/39) [17,18] r=0 mlcod 0'0 !hml peering] get_or_create_pg acting changed in 49 (msg from 45)
2011-08-03 21:36:25.975764 4a1da950 osd17 51 pg[1.34d( empty n=0 ec=2 les/c 10/46 49/49/39) [17,18] r=0 mlcod 0'0 !hml peering] get_or_create_pg acting changed in 49 (msg from 45)
2011-08-03 21:36:25.975783 4a1da950 osd17 51 pg[0.34a( v 10'5 lc 0'0 (0'0,10'5] n=5 ec=2 les/c 26/11 39/39/39) [17,7] r=0 mlcod 0'0 !hml active m=5] handle_notify 0.34a( v 10'5 (0'0,10'5] n=5 ec=2 les/c 26/11 39/39/39) from osd7
2011-08-03 21:36:25.975795 4a1da950 osd17 51 pg[0.34a( v 10'5 lc 0'0 (0'0,10'5] n=5 ec=2 les/c 26/11 39/39/39) [17,7] r=0 mlcod 0'0 !hml active m=5] state<Started/Primary/Active>: Active: got notify from 7, already have info from that osd, ignoring
2011-08-03 21:36:25.975801 4a1da950 filestore(/data/osd17) queue_transactions existing osr 0x2ca4f00/0x2c9bb68
2011-08-03 21:36:25.975805 4a1da950 filestore(/data/osd17) queue_transactions (trailing journal) 8194 0x2ca9820
2011-08-03 21:36:25.975809 4a1da950 filestore(/data/osd17) _do_transaction on 0x2ca9820
2011-08-03 21:36:25.975822 4a1da950 osd17 51 pg[2.348( empty n=0 ec=2 les/c 10/51 39/39/39) [17,7] r=0 mlcod 0'0 !hml active+clean] handle_notify 2.348( empty n=0 ec=2 les/c 10/35 39/39/39) from osd7
2011-08-03 21:36:25.975832 4a1da950 osd17 51 pg[2.348( empty n=0 ec=2 les/c 10/51 39/39/39) [17,7] r=0 mlcod 0'0 !hml active+clean] state<Started/Primary/Active>: Active: got notify from 7, already have info from that osd, ignoring
2011-08-03 21:36:25.975838 4a1da950 filestore(/data/osd17) queue_transactions existing osr 0x2ca4d20/0x2ca2b68
2011-08-03 21:36:25.975842 4a1da950 filestore(/data/osd17) queue_transactions (trailing journal) 8195 0x2ca98c0
2011-08-03 21:36:25.975846 4a1da950 filestore(/data/osd17) _do_transaction on 0x2ca98c0
2011-08-03 21:36:25.975858 4a1da950 osd17 51 pg[1.349( empty n=0 ec=2 les/c 28/51 39/39/39) [17,7] r=0 mlcod 0'0 !hml active+clean] handle_notify 1.349( empty n=0 ec=2 les/c 28/35 39/39/39) from osd7
2011-08-03 21:36:25.975868 4a1da950 osd17 51 pg[1.349( empty n=0 ec=2 les/c 28/51 39/39/39) [17,7] r=0 mlcod 0'0 !hml active+clean] state<Started/Primary/Active>: Active: got notify from 7, already have info from that osd, ignoring
2011-08-03 21:36:25.975874 4a1da950 filestore(/data/osd17) queue_transactions existing osr 0x2ca4b40/0x2c9fb68
2011-08-03 21:36:25.975878 4a1da950 filestore(/data/osd17) queue_transactions (trailing journal) 8196 0x2cd1e60
2011-08-03 21:36:25.975882 4a1da950 filestore(/data/osd17) _do_transaction on 0x2cd1e60
2011-08-03 21:36:25.975890 4a1da950 osd17 51 kick_pg_split_queue
2011-08-03 21:36:25.975910 4a1da950 -- 192.168.0.118:6803/4783 <== osd4 192.168.0.105:6801/4542 13 ==== PGnot v1 ==== 5244+0+0 (2950194561 0 0) 0x2dc2e00 con 0x26c9a00
2011-08-03 21:36:25.975915 4a1da950 osd17 51 _dispatch 0x2dc2e00 PGnot v1
2011-08-03 21:36:25.975919 4a1da950 osd17 51 handle_pg_notify from osd4
2011-08-03 21:36:25.975923 4a1da950 osd17 51 require_same_or_newer_map 46 (i am 51) 0x2dc2e00
2011-08-03 21:36:25.975934 4a1da950 osd17 51 pg[2.1p17( empty n=0 ec=2 les/c 10/46 50/50/39) [17,11] r=0 mlcod 0'0 !hml peering] get_or_create_pg acting changed in 50 (msg from 46)
2011-08-03 21:36:25.975945 4a1da950 osd17 51 pg[1.288( empty n=0 ec=2 les/c 28/51 39/39/39) [17,9] r=0 mlcod 0'0 !hml active+clean] handle_notify 1.288( empty n=0 ec=2 les/c 10/10 39/39/39) from osd4
2011-08-03 21:36:25.975956 4a1da950 osd17 51 pg[1.288( empty n=0 ec=2 les/c 28/51 39/39/39) [17,9] r=0 mlcod 0'0 !hml active+clean] state<Started/Primary/Active>: Active: got notify from 4, calling proc_replica_info and discover_all_missing
2011-08-03 21:36:25.975966 4a1da950 osd17 51 pg[1.288( empty n=0 ec=2 les/c 28/51 39/39/39) [17,9] r=0 mlcod 0'0 !hml active+clean]  got osd4 1.288( empty n=0 ec=2 les/c 10/10 39/39/39)
2011-08-03 21:36:25.975978 4a1da950 osd17 51 pg[1.288( empty n=0 ec=2 les/c 28/51 39/39/39) [17,9] r=0 mlcod 0'0 !hml active+clean]  osd4 has stray content: 1.288( empty n=0 ec=2 les/c 10/10 39/39/39)
2011-08-03 21:36:25.975988 4a1da950 osd17 51 pg[1.288( empty n=0 ec=2 les/c 28/51 39/39/39) [17,9] r=0 mlcod 0'0 !hml active+clean] purge_strays 4
2011-08-03 21:36:25.975997 4a1da950 osd17 51 pg[1.288( empty n=0 ec=2 les/c 28/51 39/39/39) [17,9] r=0 mlcod 0'0 !hml active+clean] sending PGRemove to osd4
2011-08-03 21:36:25.976013 4a1da950 osd17 51 pg[1.288( empty n=0 ec=2 les/c 28/51 39/39/39) [17,9] r=0 mlcod 0'0 !hml active+clean] update_stats 39'58
2011-08-03 21:36:25.976020 4a1da950 filestore(/data/osd17) queue_transactions existing osr 0x2ba7d20/0x2ba5b68
2011-08-03 21:36:25.976024 4a1da950 filestore(/data/osd17) queue_transactions (trailing journal) 8197 0x2cd11e0
2011-08-03 21:36:25.976028 4a1da950 filestore(/data/osd17) _do_transaction on 0x2cd11e0
2011-08-03 21:36:25.976042 4a1da950 osd17 51 pg[2.287( empty n=0 ec=2 les/c 10/35 39/39/39) [17,9] r=0 mlcod 0'0 !hml peering] handle_notify 2.287( empty n=0 ec=2 les/c 10/35 39/39/39) from osd4
2011-08-03 21:36:25.976053 4a1da950 osd17 51 pg[2.287( empty n=0 ec=2 les/c 10/35 39/39/39) [17,9] r=0 mlcod 0'0 !hml peering] state<Started/Primary>: handle_pg_notify from osd4
2011-08-03 21:36:25.976067 4a1da950 osd17 51 pg[2.287( empty n=0 ec=2 les/c 10/35 39/39/39) [17,9] r=0 mlcod 0'0 !hml peering] state<Started/Primary>: pg[2.287( empty n=0 ec=2 les/c 10/35 39/39/39) [17,9] r=0 mlcod 0'0 !hml peering] got dup osd4 info 2.287( empty n=0 ec=2 les/c 10/35 39/39/39), identical to ours
2011-08-03 21:36:25.976072 4a1da950 filestore(/data/osd17) queue_transactions existing osr 0x2ba7b40/0x2ba6b68
2011-08-03 21:36:25.976077 4a1da950 filestore(/data/osd17) queue_transactions (trailing journal) 8198 0x2cd1280
2011-08-03 21:36:25.976081 4a1da950 filestore(/data/osd17) _do_transaction on 0x2cd1280
2011-08-03 21:36:25.976093 4a1da950 osd17 51 pg[2.25d( empty n=0 ec=2 les/c 10/51 39/39/39) [17,16] r=0 mlcod 0'0 !hml active+clean] handle_notify 2.25d( empty n=0 ec=2 les/c 10/35 39/39/39) from osd4
2011-08-03 21:36:25.976104 4a1da950 osd17 51 pg[2.25d( empty n=0 ec=2 les/c 10/51 39/39/39) [17,16] r=0 mlcod 0'0 !hml active+clean] state<Started/Primary/Active>: Active: got notify from 4, calling proc_replica_info and discover_all_missing
2011-08-03 21:36:25.976114 4a1da950 osd17 51 pg[2.25d( empty n=0 ec=2 les/c 10/51 39/39/39) [17,16] r=0 mlcod 0'0 !hml active+clean]  got osd4 2.25d( empty n=0 ec=2 les/c 10/35 39/39/39)
2011-08-03 21:36:25.976127 4a1da950 osd17 51 pg[2.25d( empty n=0 ec=2 les/c 10/51 39/39/39) [17,16] r=0 mlcod 0'0 !hml active+clean]  osd4 has stray content: 2.25d( empty n=0 ec=2 les/c 10/35 39/39/39)
2011-08-03 21:36:25.976136 4a1da950 osd17 51 pg[2.25d( empty n=0 ec=2 les/c 10/51 39/39/39) [17,16] r=0 mlcod 0'0 !hml active+clean] purge_strays 4
2011-08-03 21:36:25.976145 4a1da950 osd17 51 pg[2.25d( empty n=0 ec=2 les/c 10/51 39/39/39) [17,16] r=0 mlcod 0'0 !hml active+clean] sending PGRemove to osd4
2011-08-03 21:36:25.976155 4a1da950 osd17 51 pg[2.25d( empty n=0 ec=2 les/c 10/51 39/39/39) [17,16] r=0 mlcod 0'0 !hml active+clean] update_stats 39'63
2011-08-03 21:36:25.976161 4a1da950 filestore(/data/osd17) queue_transactions existing osr 0x2bb8c30/0x2c16b68
2011-08-03 21:36:25.976165 4a1da950 filestore(/data/osd17) queue_transactions (trailing journal) 8199 0x2cd1780
2011-08-03 21:36:25.976169 4a1da950 filestore(/data/osd17) _do_transaction on 0x2cd1780
2011-08-03 21:36:25.976181 4a1da950 osd17 51 pg[1.25e( empty n=0 ec=2 les/c 10/51 39/39/39) [17,16] r=0 mlcod 0'0 !hml active+clean] handle_notify 1.25e( empty n=0 ec=2 les/c 10/35 39/39/39) from osd4
2011-08-03 21:36:25.976192 4a1da950 osd17 51 pg[1.25e( empty n=0 ec=2 les/c 10/51 39/39/39) [17,16] r=0 mlcod 0'0 !hml active+clean] state<Started/Primary/Active>: Active: got notify from 4, calling proc_replica_info and discover_all_missing
2011-08-03 21:36:25.976202 4a1da950 osd17 51 pg[1.25e( empty n=0 ec=2 les/c 10/51 39/39/39) [17,16] r=0 mlcod 0'0 !hml active+clean]  got osd4 1.25e( empty n=0 ec=2 les/c 10/35 39/39/39)
2011-08-03 21:36:25.976214 4a1da950 osd17 51 pg[1.25e( empty n=0 ec=2 les/c 10/51 39/39/39) [17,16] r=0 mlcod 0'0 !hml active+clean]  osd4 has stray content: 1.25e( empty n=0 ec=2 les/c 10/35 39/39/39)
2011-08-03 21:36:25.976223 4a1da950 osd17 51 pg[1.25e( empty n=0 ec=2 les/c 10/51 39/39/39) [17,16] r=0 mlcod 0'0 !hml active+clean] purge_strays 4
2011-08-03 21:36:25.976238 4a1da950 osd17 51 pg[1.25e( empty n=0 ec=2 les/c 10/51 39/39/39) [17,16] r=0 mlcod 0'0 !hml active+clean] sending PGRemove to osd4
2011-08-03 21:36:25.976249 4a1da950 osd17 51 pg[1.25e( empty n=0 ec=2 les/c 10/51 39/39/39) [17,16] r=0 mlcod 0'0 !hml active+clean] update_stats 39'63
2011-08-03 21:36:25.976254 4a1da950 filestore(/data/osd17) queue_transactions existing osr 0x2bb8e10/0x2c15b68
2011-08-03 21:36:25.976259 4a1da950 filestore(/data/osd17) queue_transactions (trailing journal) 8200 0x2ca91e0
2011-08-03 21:36:25.976263 4a1da950 filestore(/data/osd17) _do_transaction on 0x2ca91e0
2011-08-03 21:36:25.976278 4a1da950 osd17 51 pg[0.226( v 10'3 lc 0'0 (0'0,10'3] n=3 ec=2 les/c 29/11 39/39/39) [17,5] r=0 mlcod 0'0 !hml active m=3] handle_notify 0.226( v 10'3 (0'0,10'3] n=3 ec=2 les/c 10/11 39/39/39) from osd4
2011-08-03 21:36:25.976290 4a1da950 osd17 51 pg[0.226( v 10'3 lc 0'0 (0'0,10'3] n=3 ec=2 les/c 29/11 39/39/39) [17,5] r=0 mlcod 0'0 !hml active m=3] state<Started/Primary/Active>: Active: got notify from 4, already have info from that osd, ignoring
2011-08-03 21:36:25.976296 4a1da950 filestore(/data/osd17) queue_transactions existing osr 0x2ac8000/0x2ad2b68
2011-08-03 21:36:25.976300 4a1da950 filestore(/data/osd17) queue_transactions (trailing journal) 8201 0x2ca93c0
2011-08-03 21:36:25.976304 4a1da950 filestore(/data/osd17) _do_transaction on 0x2ca93c0
2011-08-03 21:36:25.976316 4a1da950 osd17 51 pg[1.4a7( empty n=0 ec=2 les/c 10/35 39/39/39) [17,4] r=0 mlcod 0'0 !hml peering] handle_notify 1.4a7( empty n=0 ec=2 les/c 10/35 39/39/39) from osd4
2011-08-03 21:36:25.976329 4a1da950 osd17 51 pg[1.4a7( empty n=0 ec=2 les/c 10/35 39/39/39) [17,4] r=0 mlcod 0'0 !hml peering]  got dup osd4 info 1.4a7( empty n=0 ec=2 les/c 10/35 39/39/39), identical to ours
2011-08-03 21:36:25.976335 4a1da950 filestore(/data/osd17) queue_transactions existing osr 0x2b0b870/0x2bccb68
2011-08-03 21:36:25.976339 4a1da950 filestore(/data/osd17) queue_transactions (trailing journal) 8202 0x2ca9320
2011-08-03 21:36:25.976343 4a1da950 filestore(/data/osd17) _do_transaction on 0x2ca9320
2011-08-03 21:36:25.976355 4a1da950 osd17 51 pg[2.4a6( empty n=0 ec=2 les/c 10/35 39/39/39) [17,4] r=0 mlcod 0'0 !hml peering] handle_notify 2.4a6( empty n=0 ec=2 les/c 10/35 39/39/39) from osd4
2011-08-03 21:36:25.976367 4a1da950 osd17 51 pg[2.4a6( empty n=0 ec=2 les/c 10/35 39/39/39) [17,4] r=0 mlcod 0'0 !hml peering]  got dup osd4 info 2.4a6( empty n=0 ec=2 les/c 10/35 39/39/39), identical to ours
2011-08-03 21:36:25.976373 4a1da950 filestore(/data/osd17) queue_transactions existing osr 0x2b0b690/0x2bc8b68
2011-08-03 21:36:25.976377 4a1da950 filestore(/data/osd17) queue_transactions (trailing journal) 8203 0x2ca9f00
2011-08-03 21:36:25.976380 4a1da950 filestore(/data/osd17) _do_transaction on 0x2ca9f00
2011-08-03 21:36:25.976391 4a1da950 osd17 51 pg[2.41d( empty n=0 ec=2 les/c 10/47 50/50/39) [17,15] r=0 mlcod 0'0 !hml peering] get_or_create_pg acting changed in 50 (msg from 46)
2011-08-03 21:36:25.976401 4a1da950 osd17 51 pg[1.41e( empty n=0 ec=2 les/c 10/47 50/50/39) [17,15] r=0 mlcod 0'0 !hml peering] get_or_create_pg acting changed in 50 (msg from 46)
2011-08-03 21:36:25.976412 4a1da950 osd17 51 pg[0.41f( v 10'2 lc 0'0 (0'0,10'2] n=2 ec=2 les/c 10/11 50/50/39) [17,15] r=0 mlcod 0'0 !hml peering m=2 u=2] get_or_create_pg acting changed in 50 (msg from 46)
2011-08-03 21:36:25.976421 4a1da950 osd17 51 pg[1.e8( empty n=0 ec=2 les/c 38/47 49/49/39) [17,14] r=0 mlcod 0'0 !hml peering] get_or_create_pg acting changed in 49 (msg from 46)
2011-08-03 21:36:25.976431 4a1da950 osd17 51 pg[2.e7( empty n=0 ec=2 les/c 38/47 49/49/39) [17,14] r=0 mlcod 0'0 !hml peering] get_or_create_pg acting changed in 49 (msg from 46)
2011-08-03 21:36:25.976441 4a1da950 osd17 51 pg[2.394( empty n=0 ec=2 les/c 10/51 39/39/39) [17,3] r=0 mlcod 0'0 !hml active+clean] handle_notify 2.394( empty n=0 ec=2 les/c 10/35 39/39/39) from osd4
2011-08-03 21:36:25.976452 4a1da950 osd17 51 pg[2.394( empty n=0 ec=2 les/c 10/51 39/39/39) [17,3] r=0 mlcod 0'0 !hml active+clean] state<Started/Primary/Active>: Active: got notify from 4, calling proc_replica_info and discover_all_missing
2011-08-03 21:36:25.976467 4a1da950 osd17 51 pg[2.394( empty n=0 ec=2 les/c 10/51 39/39/39) [17,3] r=0 mlcod 0'0 !hml active+clean]  got osd4 2.394( empty n=0 ec=2 les/c 10/35 39/39/39)
2011-08-03 21:36:25.976480 4a1da950 osd17 51 pg[2.394( empty n=0 ec=2 les/c 10/51 39/39/39) [17,3] r=0 mlcod 0'0 !hml active+clean]  osd4 has stray content: 2.394( empty n=0 ec=2 les/c 10/35 39/39/39)
2011-08-03 21:36:25.976490 4a1da950 osd17 51 pg[2.394( empty n=0 ec=2 les/c 10/51 39/39/39) [17,3] r=0 mlcod 0'0 !hml active+clean] purge_strays 4
2011-08-03 21:36:25.976499 4a1da950 osd17 51 pg[2.394( empty n=0 ec=2 les/c 10/51 39/39/39) [17,3] r=0 mlcod 0'0 !hml active+clean] sending PGRemove to osd4
2011-08-03 21:36:25.976509 4a1da950 osd17 51 pg[2.394( empty n=0 ec=2 les/c 10/51 39/39/39) [17,3] r=0 mlcod 0'0 !hml active+clean] update_stats 39'58
2011-08-03 21:36:25.976515 4a1da950 filestore(/data/osd17) queue_transactions existing osr 0x2c301e0/0x2b48b68
2011-08-03 21:36:25.976519 4a1da950 filestore(/data/osd17) queue_transactions (trailing journal) 8204 0x2ca9500
2011-08-03 21:36:25.976523 4a1da950 filestore(/data/osd17) _do_transaction on 0x2ca9500
2011-08-03 21:36:25.976538 4a1da950 osd17 51 pg[0.396( v 10'2 lc 0'0 (0'0,10'2] n=2 ec=2 les/c 10/11 39/39/39) [17,3] r=0 mlcod 0'0 !hml active m=2] handle_notify 0.396( v 10'2 (0'0,10'2] n=2 ec=2 les/c 10/11 39/39/39) from osd4
2011-08-03 21:36:25.976551 4a1da950 osd17 51 pg[0.396( v 10'2 lc 0'0 (0'0,10'2] n=2 ec=2 les/c 10/11 39/39/39) [17,3] r=0 mlcod 0'0 !hml active m=2] state<Started/Primary/Active>: Active: got notify from 4, already have info from that osd, ignoring
2011-08-03 21:36:25.976556 4a1da950 filestore(/data/osd17) queue_transactions existing osr 0x2c30000/0x2c40b68
2011-08-03 21:36:25.976561 4a1da950 filestore(/data/osd17) queue_transactions (trailing journal) 8205 0x2ca9e60
2011-08-03 21:36:25.976564 4a1da950 filestore(/data/osd17) _do_transaction on 0x2ca9e60
2011-08-03 21:36:25.976577 4a1da950 osd17 51 pg[1.395( empty n=0 ec=2 les/c 10/51 39/39/39) [17,3] r=0 mlcod 0'0 !hml active+clean] handle_notify 1.395( empty n=0 ec=2 les/c 10/35 39/39/39) from osd4
2011-08-03 21:36:25.976588 4a1da950 osd17 51 pg[1.395( empty n=0 ec=2 les/c 10/51 39/39/39) [17,3] r=0 mlcod 0'0 !hml active+clean] state<Started/Primary/Active>: Active: got notify from 4, calling proc_replica_info and discover_all_missing
2011-08-03 21:36:25.976598 4a1da950 osd17 51 pg[1.395( empty n=0 ec=2 les/c 10/51 39/39/39) [17,3] r=0 mlcod 0'0 !hml active+clean]  got osd4 1.395( empty n=0 ec=2 les/c 10/35 39/39/39)
2011-08-03 21:36:25.976611 4a1da950 osd17 51 pg[1.395( empty n=0 ec=2 les/c 10/51 39/39/39) [17,3] r=0 mlcod 0'0 !hml active+clean]  osd4 has stray content: 1.395( empty n=0 ec=2 les/c 10/35 39/39/39)
2011-08-03 21:36:25.976620 4a1da950 osd17 51 pg[1.395( empty n=0 ec=2 les/c 10/51 39/39/39) [17,3] r=0 mlcod 0'0 !hml active+clean] purge_strays 4
2011-08-03 21:36:25.976629 4a1da950 osd17 51 pg[1.395( empty n=0 ec=2 les/c 10/51 39/39/39) [17,3] r=0 mlcod 0'0 !hml active+clean] sending PGRemove to osd4
2011-08-03 21:36:25.976639 4a1da950 osd17 51 pg[1.395( empty n=0 ec=2 les/c 10/51 39/39/39) [17,3] r=0 mlcod 0'0 !hml active+clean] update_stats 39'58
2011-08-03 21:36:25.976645 4a1da950 filestore(/data/osd17) queue_transactions existing osr 0x2c47e10/0x2c41b68
2011-08-03 21:36:25.976649 4a1da950 filestore(/data/osd17) queue_transactions (trailing journal) 8206 0x2ca9780
2011-08-03 21:36:25.976653 4a1da950 filestore(/data/osd17) _do_transaction on 0x2ca9780
2011-08-03 21:36:25.976669 4a1da950 osd17 51 pg[0.4b( v 10'3 lc 0'0 (10'1,10'3]+backlog n=3 ec=2 les/c 10/10 39/39/39) [17,13] r=0 mlcod 0'0 !hml active m=3] handle_notify 0.4b( v 10'3 (10'1,10'3] n=3 ec=2 les/c 10/10 39/39/39) from osd4
2011-08-03 21:36:25.976680 4a1da950 osd17 51 pg[0.4b( v 10'3 lc 0'0 (10'1,10'3]+backlog n=3 ec=2 les/c 10/10 39/39/39) [17,13] r=0 mlcod 0'0 !hml active m=3] state<Started/Primary/Active>: Active: got notify from 4, already have info from that osd, ignoring
2011-08-03 21:36:25.976691 4a1da950 filestore(/data/osd17) queue_transactions existing osr 0x2bf5960/0x2bf4b68
2011-08-03 21:36:25.976696 4a1da950 filestore(/data/osd17) queue_transactions (trailing journal) 8207 0x2ca9c80
2011-08-03 21:36:25.976700 4a1da950 filestore(/data/osd17) _do_transaction on 0x2ca9c80
2011-08-03 21:36:25.976706 4a1da950 osd17 51 kick_pg_split_queue
2011-08-03 21:36:25.976718 4a1da950 -- 192.168.0.118:6803/4783 <== osd9 192.168.0.110:6801/4621 14 ==== PGnot v1 ==== 5244+0+0 (3032264584 0 0) 0x30aa540 con 0x2707640
2011-08-03 21:36:25.976723 4a1da950 osd17 51 _dispatch 0x30aa540 PGnot v1
2011-08-03 21:36:25.976727 4a1da950 osd17 51 handle_pg_notify from osd9
2011-08-03 21:36:25.976731 4a1da950 osd17 51 require_same_or_newer_map 45 (i am 51) 0x30aa540
2011-08-03 21:36:25.976745 4a1da950 osd17 51 pg[0.1f8( v 10'2 lc 0'0 (0'0,10'2] n=2 ec=2 les/c 26/11 39/39/39) [17,5] r=0 mlcod 0'0 !hml active m=2] handle_notify 0.1f8( v 10'2 (0'0,10'2] n=2 ec=2 les/c 10/11 39/39/39) from osd9
2011-08-03 21:36:25.976757 4a1da950 osd17 51 pg[0.1f8( v 10'2 lc 0'0 (0'0,10'2] n=2 ec=2 les/c 26/11 39/39/39) [17,5] r=0 mlcod 0'0 !hml active m=2] state<Started/Primary/Active>: Active: got notify from 9, already have info from that osd, ignoring
2011-08-03 21:36:25.976762 4a1da950 filestore(/data/osd17) queue_transactions existing osr 0x2adec30/0x2ad9b68
2011-08-03 21:36:25.976766 4a1da950 filestore(/data/osd17) queue_transactions (trailing journal) 8208 0x2ca9dc0
2011-08-03 21:36:25.976770 4a1da950 filestore(/data/osd17) _do_transaction on 0x2ca9dc0
2011-08-03 21:36:25.976786 4a1da950 osd17 51 pg[1.15e( empty n=0 ec=2 les/c 10/35 39/39/39) [17,9] r=0 mlcod 0'0 !hml peering] handle_notify 1.15e( empty n=0 ec=2 les/c 10/35 39/39/39) from osd9
2011-08-03 21:36:25.976797 4a1da950 osd17 51 pg[1.15e( empty n=0 ec=2 les/c 10/35 39/39/39) [17,9] r=0 mlcod 0'0 !hml peering] state<Started/Primary>: handle_pg_notify from osd9
2011-08-03 21:36:25.976810 4a1da950 osd17 51 pg[1.15e( empty n=0 ec=2 les/c 10/35 39/39/39) [17,9] r=0 mlcod 0'0 !hml peering] state<Started/Primary>: pg[1.15e( empty n=0 ec=2 les/c 10/35 39/39/39) [17,9] r=0 mlcod 0'0 !hml peering] got dup osd9 info 1.15e( empty n=0 ec=2 les/c 10/35 39/39/39), identical to ours
2011-08-03 21:36:25.976816 4a1da950 filestore(/data/osd17) queue_transactions existing osr 0x2b2e0f0/0x2b5ab68
2011-08-03 21:36:25.976820 4a1da950 filestore(/data/osd17) queue_transactions (trailing journal) 8209 0x2ca9b40
2011-08-03 21:36:25.976824 4a1da950 filestore(/data/osd17) _do_transaction on 0x2ca9b40
2011-08-03 21:36:25.976837 4a1da950 osd17 51 pg[0.15f( empty n=0 ec=2 les/c 10/10 39/39/39) [17,9] r=0 mlcod 0'0 !hml peering] handle_notify 0.15f( v 10'5 (10'3,10'5]+backlog n=5 ec=2 les/c 10/10 39/39/39) from osd9
2011-08-03 21:36:25.976848 4a1da950 osd17 51 pg[0.15f( empty n=0 ec=2 les/c 10/10 39/39/39) [17,9] r=0 mlcod 0'0 !hml peering] state<Started/Primary>: handle_pg_notify from osd9
2011-08-03 21:36:25.976862 4a1da950 osd17 51 pg[0.15f( empty n=0 ec=2 les/c 10/10 39/39/39) [17,9] r=0 mlcod 0'0 !hml peering] state<Started/Primary>: pg[0.15f( empty n=0 ec=2 les/c 10/10 39/39/39) [17,9] r=0 mlcod 0'0 !hml peering] got dup osd9 info 0.15f( v 10'5 (10'3,10'5]+backlog n=5 ec=2 les/c 10/10 39/39/39), identical to ours
2011-08-03 21:36:25.976867 4a1da950 filestore(/data/osd17) queue_transactions existing osr 0x2b68f00/0x2b5bb68
2011-08-03 21:36:25.976871 4a1da950 filestore(/data/osd17) queue_transactions (trailing journal) 8210 0x2ca9aa0
2011-08-03 21:36:25.976875 4a1da950 filestore(/data/osd17) _do_transaction on 0x2ca9aa0
2011-08-03 21:36:25.976888 4a1da950 osd17 51 pg[2.15d( empty n=0 ec=2 les/c 10/35 39/39/39) [17,9] r=0 mlcod 0'0 !hml peering] handle_notify 2.15d( empty n=0 ec=2 les/c 10/35 39/39/39) from osd9
2011-08-03 21:36:25.976899 4a1da950 osd17 51 pg[2.15d( empty n=0 ec=2 les/c 10/35 39/39/39) [17,9] r=0 mlcod 0'0 !hml peering] state<Started/Primary>: handle_pg_notify from osd9
2011-08-03 21:36:25.976912 4a1da950 osd17 51 pg[2.15d( empty n=0 ec=2 les/c 10/35 39/39/39) [17,9] r=0 mlcod 0'0 !hml peering] state<Started/Primary>: pg[2.15d( empty n=0 ec=2 les/c 10/35 39/39/39) [17,9] r=0 mlcod 0'0 !hml peering] got dup osd9 info 2.15d( empty n=0 ec=2 les/c 10/35 39/39/39), identical to ours
2011-08-03 21:36:25.976923 4a1da950 filestore(/data/osd17) queue_transactions existing osr 0x2b68d20/0x2b60b68
2011-08-03 21:36:25.976928 4a1da950 filestore(/data/osd17) queue_transactions (trailing journal) 8211 0x2ca9280
2011-08-03 21:36:25.976932 4a1da950 filestore(/data/osd17) _do_transaction on 0x2ca9280
2011-08-03 21:36:25.976947 4a1da950 osd17 51 pg[0.95( v 10'3 lc 0'0 (10'1,10'3]+backlog n=3 ec=2 les/c 10/10 39/39/39) [17,13] r=0 mlcod 0'0 !hml active m=3] handle_notify 0.95( v 10'3 (10'1,10'3] n=3 ec=2 les/c 10/10 39/39/39) from osd9
2011-08-03 21:36:25.976959 4a1da950 osd17 51 pg[0.95( v 10'3 lc 0'0 (10'1,10'3]+backlog n=3 ec=2 les/c 10/10 39/39/39) [17,13] r=0 mlcod 0'0 !hml active m=3] state<Started/Primary/Active>: Active: got notify from 9, already have info from that osd, ignoring
2011-08-03 21:36:25.976965 4a1da950 filestore(/data/osd17) queue_transactions existing osr 0x2b68b40/0x2b64b68
2011-08-03 21:36:25.976969 4a1da950 filestore(/data/osd17) queue_transactions (trailing journal) 8212 0x2ca9be0
2011-08-03 21:36:25.976973 4a1da950 filestore(/data/osd17) _do_transaction on 0x2ca9be0
2011-08-03 21:36:25.976985 4a1da950 osd17 51 pg[2.88( empty n=0 ec=2 les/c 21/51 39/39/39) [17,9] r=0 mlcod 0'0 !hml active+clean] handle_notify 2.88( empty n=0 ec=2 les/c 21/35 39/39/39) from osd9
2011-08-03 21:36:25.976996 4a1da950 osd17 51 pg[2.88( empty n=0 ec=2 les/c 21/51 39/39/39) [17,9] r=0 mlcod 0'0 !hml active+clean] state<Started/Primary/Active>: Active: got notify from 9, already have info from that osd, ignoring
2011-08-03 21:36:25.977001 4a1da950 filestore(/data/osd17) queue_transactions existing osr 0x2b685a0/0x2b6cb68
2011-08-03 21:36:25.977006 4a1da950 filestore(/data/osd17) queue_transactions (trailing journal) 8213 0x2ca9640
2011-08-03 21:36:25.977009 4a1da950 filestore(/data/osd17) _do_transaction on 0x2ca9640
2011-08-03 21:36:25.977021 4a1da950 osd17 51 pg[1.89( empty n=0 ec=2 les/c 21/51 39/39/39) [17,9] r=0 mlcod 0'0 !hml active+clean] handle_notify 1.89( empty n=0 ec=2 les/c 21/35 39/39/39) from osd9
2011-08-03 21:36:25.977032 4a1da950 osd17 51 pg[1.89( empty n=0 ec=2 les/c 21/51 39/39/39) [17,9] r=0 mlcod 0'0 !hml active+clean] state<Started/Primary/Active>: Active: got notify from 9, already have info from that osd, ignoring
2011-08-03 21:36:25.977037 4a1da950 filestore(/data/osd17) queue_transactions existing osr 0x2b681e0/0x2b6eb68
2011-08-03 21:36:25.977041 4a1da950 filestore(/data/osd17) queue_transactions (trailing journal) 8214 0x2ca96e0
2011-08-03 21:36:25.977045 4a1da950 filestore(/data/osd17) _do_transaction on 0x2ca96e0
2011-08-03 21:36:25.977059 4a1da950 osd17 51 pg[0.28( v 10'2 lc 0'0 (0'0,10'2] n=2 ec=2 les/c 28/11 39/39/39) [17,2] r=0 mlcod 0'0 !hml active m=2] handle_notify 0.28( v 10'2 (0'0,10'2] n=2 ec=2 les/c 10/10 39/39/39) from osd9
2011-08-03 21:36:25.977071 4a1da950 osd17 51 pg[0.28( v 10'2 lc 0'0 (0'0,10'2] n=2 ec=2 les/c 28/11 39/39/39) [17,2] r=0 mlcod 0'0 !hml active m=2] state<Started/Primary/Active>: Active: got notify from 9, already have info from that osd, ignoring
2011-08-03 21:36:25.977076 4a1da950 filestore(/data/osd17) queue_transactions existing osr 0x2b68000/0x2b72b68
2011-08-03 21:36:25.977081 4a1da950 filestore(/data/osd17) queue_transactions (trailing journal) 8215 0x2ca9a00
2011-08-03 21:36:25.977084 4a1da950 filestore(/data/osd17) _do_transaction on 0x2ca9a00
2011-08-03 21:36:25.977099 4a1da950 osd17 51 pg[0.3fb( v 10'4 lc 0'0 (10'2,10'4]+backlog n=4 ec=3 les/c 7/11 39/39/39) [17,16] r=0 mlcod 0'0 !hml active m=4] handle_notify 0.3fb( v 10'4 (10'2,10'4] n=4 ec=3 les/c 7/11 39/39/39) from osd9
2011-08-03 21:36:25.977110 4a1da950 osd17 51 pg[0.3fb( v 10'4 lc 0'0 (10'2,10'4]+backlog n=4 ec=3 les/c 7/11 39/39/39) [17,16] r=0 mlcod 0'0 !hml active m=4] state<Started/Primary/Active>: Active: got notify from 9, already have info from that osd, ignoring
2011-08-03 21:36:25.977120 4a1da950 filestore(/data/osd17) queue_transactions existing osr 0x2b7ca50/0x2b7bb68
2011-08-03 21:36:25.977125 4a1da950 filestore(/data/osd17) queue_transactions (trailing journal) 8216 0x2ca9960
2011-08-03 21:36:25.977129 4a1da950 filestore(/data/osd17) _do_transaction on 0x2ca9960
2011-08-03 21:36:25.977144 4a1da950 osd17 51 pg[0.3ef( v 10'6 lc 0'0 (10'4,10'6]+backlog n=6 ec=2 les/c 10/11 39/39/39) [17,5] r=0 mlcod 0'0 !hml active m=6] handle_notify 0.3ef( v 10'6 (10'4,10'6] n=6 ec=2 les/c 10/11 39/39/39) from osd9
2011-08-03 21:36:25.977156 4a1da950 osd17 51 pg[0.3ef( v 10'6 lc 0'0 (10'4,10'6]+backlog n=6 ec=2 les/c 10/11 39/39/39) [17,5] r=0 mlcod 0'0 !hml active m=6] state<Started/Primary/Active>: Active: got notify from 9, already have info from that osd, ignoring
2011-08-03 21:36:25.977162 4a1da950 filestore(/data/osd17) queue_transactions existing osr 0x2b0a870/0x2b0cb68
2011-08-03 21:36:25.977166 4a1da950 filestore(/data/osd17) queue_transactions (trailing journal) 8217 0x2ca9820
2011-08-03 21:36:25.977170 4a1da950 filestore(/data/osd17) _do_transaction on 0x2ca9820
2011-08-03 21:36:25.977181 4a1da950 osd17 51 pg[1.3d0( v 10'1 lc 0'0 (0'0,10'1] n=1 ec=2 les/c 38/11 49/49/39) [17,10] r=0 mlcod 0'0 !hml peering m=1 u=1] get_or_create_pg acting changed in 49 (msg from 45)
2011-08-03 21:36:25.977190 4a1da950 osd17 51 pg[0.3d1( empty n=0 ec=2 les/c 10/11 49/49/39) [17,10] r=0 mlcod 0'0 !hml peering] get_or_create_pg acting changed in 49 (msg from 45)
2011-08-03 21:36:25.977201 4a1da950 osd17 51 pg[1.3b0( empty n=0 ec=2 les/c 21/51 39/39/39) [17,9] r=0 mlcod 0'0 !hml active+clean] handle_notify 1.3b0( empty n=0 ec=2 les/c 21/35 39/39/39) from osd9
2011-08-03 21:36:25.977211 4a1da950 osd17 51 pg[1.3b0( empty n=0 ec=2 les/c 21/51 39/39/39) [17,9] r=0 mlcod 0'0 !hml active+clean] state<Started/Primary/Active>: Active: got notify from 9, already have info from that osd, ignoring
2011-08-03 21:36:25.977216 4a1da950 filestore(/data/osd17) queue_transactions existing osr 0x2b7c000/0x2b88b68
2011-08-03 21:36:25.977221 4a1da950 filestore(/data/osd17) queue_transactions (trailing journal) 8218 0x2ca98c0
2011-08-03 21:36:25.977224 4a1da950 filestore(/data/osd17) _do_transaction on 0x2ca98c0
2011-08-03 21:36:25.977236 4a1da950 osd17 51 pg[2.3af( empty n=0 ec=2 les/c 24/51 39/39/39) [17,9] r=0 mlcod 0'0 !hml active+clean] handle_notify 2.3af( empty n=0 ec=2 les/c 24/35 39/39/39) from osd9
2011-08-03 21:36:25.977247 4a1da950 osd17 51 pg[2.3af( empty n=0 ec=2 les/c 24/51 39/39/39) [17,9] r=0 mlcod 0'0 !hml active+clean] state<Started/Primary/Active>: Active: got notify from 9, already have info from that osd, ignoring
2011-08-03 21:36:25.977253 4a1da950 filestore(/data/osd17) queue_transactions existing osr 0x2b93c30/0x2b8ab68
2011-08-03 21:36:25.977257 4a1da950 filestore(/data/osd17) queue_transactions (trailing journal) 8219 0x2ca91e0
2011-08-03 21:36:25.977261 4a1da950 filestore(/data/osd17) _do_transaction on 0x2ca91e0
2011-08-03 21:36:25.977271 4a1da950 osd17 51 pg[0.354( empty n=0 ec=2 les/c 10/10 50/50/39) [17,11] r=0 mlcod 0'0 !hml peering] get_or_create_pg acting changed in 50 (msg from 45)
2011-08-03 21:36:25.977282 4a1da950 osd17 51 pg[1.288( empty n=0 ec=2 les/c 28/51 39/39/39) [17,9] r=0 mlcod 0'0 !hml active+clean] handle_notify 1.288( empty n=0 ec=2 les/c 28/35 39/39/39) from osd9
2011-08-03 21:36:25.977291 4a1da950 osd17 51 pg[1.288( empty n=0 ec=2 les/c 28/51 39/39/39) [17,9] r=0 mlcod 0'0 !hml active+clean] state<Started/Primary/Active>: Active: got notify from 9, already have info from that osd, ignoring
2011-08-03 21:36:25.977296 4a1da950 filestore(/data/osd17) queue_transactions existing osr 0x2ba7d20/0x2ba5b68
2011-08-03 21:36:25.977301 4a1da950 filestore(/data/osd17) queue_transactions (trailing journal) 8220 0x2ca93c0
2011-08-03 21:36:25.977304 4a1da950 filestore(/data/osd17) _do_transaction on 0x2ca93c0
2011-08-03 21:36:25.977316 4a1da950 osd17 51 pg[2.287( empty n=0 ec=2 les/c 10/35 39/39/39) [17,9] r=0 mlcod 0'0 !hml peering] handle_notify 2.287( empty n=0 ec=2 les/c 10/35 39/39/39) from osd9
2011-08-03 21:36:25.977332 4a1da950 osd17 51 pg[2.287( empty n=0 ec=2 les/c 10/35 39/39/39) [17,9] r=0 mlcod 0'0 !hml peering] state<Started/Primary>: handle_pg_notify from osd9
2011-08-03 21:36:25.977345 4a1da950 osd17 51 pg[2.287( empty n=0 ec=2 les/c 10/35 39/39/39) [17,9] r=0 mlcod 0'0 !hml peering] state<Started/Primary>: pg[2.287( empty n=0 ec=2 les/c 10/35 39/39/39) [17,9] r=0 mlcod 0'0 !hml peering] got dup osd9 info 2.287( empty n=0 ec=2 les/c 10/35 39/39/39), identical to ours
2011-08-03 21:36:25.977352 4a1da950 filestore(/data/osd17) queue_transactions existing osr 0x2ba7b40/0x2ba6b68
2011-08-03 21:36:25.977356 4a1da950 filestore(/data/osd17) queue_transactions (trailing journal) 8221 0x2ca9320
2011-08-03 21:36:25.977360 4a1da950 filestore(/data/osd17) _do_transaction on 0x2ca9320
2011-08-03 21:36:25.977367 4a1da950 osd17 51 kick_pg_split_queue
2011-08-03 21:36:25.977379 4a1da950 -- 192.168.0.118:6803/4783 <== osd5 192.168.0.106:6801/4590 24 ==== pg_log(0.3f e43) v1 ==== 529+0+0 (492439621 0 0) 0x2f9c800 con 0x26c9780
2011-08-03 21:36:25.977384 4a1da950 osd17 51 _dispatch 0x2f9c800 pg_log(0.3f e43) v1
2011-08-03 21:36:25.977389 4a1da950 osd17 51 handle_pg_log pg_log(0.3f e43) v1 from osd5
2011-08-03 21:36:25.977393 4a1da950 osd17 51 require_same_or_newer_map 43 (i am 51) 0x2f9c800
2011-08-03 21:36:25.977406 4a1da950 osd17 51 pg[0.3f( v 10'2 lc 0'0 (0'0,10'2] n=2 ec=2 les/c 10/10 39/39/39) [17,6] r=0 mlcod 0'0 !hml peering m=2 u=2] handle_log pg_log(0.3f e43) v1 from osd5
2011-08-03 21:36:25.977421 4a1da950 osd17 51 pg[0.3f( v 10'2 lc 0'0 (0'0,10'2] n=2 ec=2 les/c 10/10 39/39/39) [17,6] r=0 mlcod 0'0 !hml peering m=2 u=2] exit Started/Primary/Peering/GetInfo 2.045663 3 0.000233
2011-08-03 21:36:25.977433 4a1da950 osd17 51 pg[0.3f( v 10'2 lc 0'0 (0'0,10'2] n=2 ec=2 les/c 10/10 39/39/39) [17,6] r=0 mlcod 0'0 !hml peering m=2 u=2] state<Started/Primary/Peering>: Leaving Peering
2011-08-03 21:36:25.977444 4a1da950 osd17 51 pg[0.3f( v 10'2 lc 0'0 (0'0,10'2] n=2 ec=2 les/c 10/10 39/39/39) [17,6] r=0 mlcod 0'0 !hml peering m=2 u=2] exit Started/Primary/Peering 2.045697 0 0.000000
2011-08-03 21:36:25.977457 4a1da950 osd17 51 pg[0.3f( v 10'2 lc 0'0 (0'0,10'2] n=2 ec=2 les/c 10/10 39/39/39) [17,6] r=0 mlcod 0'0 !hml peering m=2 u=2] exit Started/Primary 2.045721 0 0.000000
2011-08-03 21:36:25.977469 4a1da950 osd17 51 pg[0.3f( v 10'2 lc 0'0 (0'0,10'2] n=2 ec=2 les/c 10/10 39/39/39) [17,6] r=0 mlcod 0'0 !hml peering m=2 u=2] exit Started 2.045774 0 0.000000
2011-08-03 21:36:25.977480 4a1da950 osd17 51 pg[0.3f( v 10'2 lc 0'0 (0'0,10'2] n=2 ec=2 les/c 10/10 39/39/39) [17,6] r=0 mlcod 0'0 !hml peering m=2 u=2] enter Crashed
osd/PG.cc: In function 'PG::RecoveryState::Crashed::Crashed(boost::statechart::state<PG::RecoveryState::Crashed, PG::RecoveryState::RecoveryMachine, boost::mpl::list<mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na>, has_no_history>::my_context)', in thread '0x4a1da950'
osd/PG.cc: 3882: FAILED assert(0 == "we got a bad state machine event")

 1: (PG::RecoveryState::Crashed::Crashed(boost::statechart::state<PG::RecoveryState::Crashed, PG::RecoveryState::RecoveryMachine, boost::mpl::list<mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na>, (boost::statechart::history_mode)0>::my_context)+0xb6) [0x56d296]
 2: (boost::statechart::detail::inner_constructor<boost::mpl::l_item<mpl_::long_<1l>, PG::RecoveryState::Crashed, boost::mpl::l_end>, boost::statechart::state_machine<PG::RecoveryState::RecoveryMachine, PG::RecoveryState::Initial, std::allocator<void>, boost::statechart::null_exception_translator> >::construct(boost::statechart::state_machine<PG::RecoveryState::RecoveryMachine, PG::RecoveryState::Initial, std::allocator<void>, boost::statechart::null_exception_translator>* const&, boost::statechart::state_machine<PG::RecoveryState::RecoveryMachine, PG::RecoveryState::Initial, std::allocator<void>, boost::statechart::null_exception_translator>&)+0x2d) [0x5adcdd]
 3: (boost::statechart::simple_state<PG::RecoveryState::Started, PG::RecoveryState::RecoveryMachine, PG::RecoveryState::Start, (boost::statechart::history_mode)0>::react_impl(boost::statechart::event_base const&, void const*)+0xc9) [0x5aec79]
 4: (boost::statechart::simple_state<PG::RecoveryState::Primary, PG::RecoveryState::Started, PG::RecoveryState::Peering, (boost::statechart::history_mode)0>::react_impl(boost::statechart::event_base const&, void const*)+0x101) [0x5af461]
 5: (boost::statechart::simple_state<PG::RecoveryState::Peering, PG::RecoveryState::Primary, PG::RecoveryState::GetInfo, (boost::statechart::history_mode)0>::react_impl(boost::statechart::event_base const&, void const*)+0x95) [0x5b2685]
 6: (boost::statechart::simple_state<PG::RecoveryState::GetInfo, PG::RecoveryState::Peering, boost::mpl::list<mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na>, (boost::statechart::history_mode)0>::react_impl(boost::statechart::event_base const&, void const*)+0x95) [0x5b4005]
 7: (boost::statechart::state_machine<PG::RecoveryState::RecoveryMachine, PG::RecoveryState::Initial, std::allocator<void>, boost::statechart::null_exception_translator>::process_event(boost::statechart::event_base const&)+0x4b) [0x5af64b]
 8: (PG::RecoveryState::handle_log(int, MOSDPGLog*, PG::RecoveryCtx*)+0x142) [0x57d592]
 9: (OSD::handle_pg_log(MOSDPGLog*)+0x34a) [0x52d9ea]
 10: (OSD::_dispatch(Message*)+0x515) [0x52e4d5]
 11: (OSD::ms_dispatch(Message*)+0xd9) [0x52eef9]
 12: (SimpleMessenger::dispatch_entry()+0x907) [0x62a577]
 13: (SimpleMessenger::DispatchThread::entry()+0x1f) [0x49af1f]
 14: /lib/libpthread.so.0 [0x7fda38d29fc7]
 15: (clone()+0x6d) [0x7fda3799a64d]

*** Caught signal (Aborted) **
 in thread 0x4a1da950
 1: /bsd/bin/cosd [0x6473a8]
 2: /lib/libpthread.so.0 [0x7fda38d31a80]
 3: (gsignal()+0x35) [0x7fda378fced5]
 4: (abort()+0x183) [0x7fda378fe3f3]
 5: (__gnu_cxx::__verbose_terminate_handler()+0x115) [0x7fda38184dc5]
 6: /usr/lib/libstdc++.so.6 [0x7fda38183166]
 7: /usr/lib/libstdc++.so.6 [0x7fda38183193]
 8: /usr/lib/libstdc++.so.6 [0x7fda3818328e]
 9: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x37d) [0x612a4d]
 10: (PG::RecoveryState::Crashed::Crashed(boost::statechart::state<PG::RecoveryState::Crashed, PG::RecoveryState::RecoveryMachine, boost::mpl::list<mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na>, (boost::statechart::history_mode)0>::my_context)+0xb6) [0x56d296]
 11: (boost::statechart::detail::inner_constructor<boost::mpl::l_item<mpl_::long_<1l>, PG::RecoveryState::Crashed, boost::mpl::l_end>, boost::statechart::state_machine<PG::RecoveryState::RecoveryMachine, PG::RecoveryState::Initial, std::allocator<void>, boost::statechart::null_exception_translator> >::construct(boost::statechart::state_machine<PG::RecoveryState::RecoveryMachine, PG::RecoveryState::Initial, std::allocator<void>, boost::statechart::null_exception_translator>* const&, boost::statechart::state_machine<PG::RecoveryState::RecoveryMachine, PG::RecoveryState::Initial, std::allocator<void>, boost::statechart::null_exception_translator>&)+0x2d) [0x5adcdd]
 12: (boost::statechart::simple_state<PG::RecoveryState::Started, PG::RecoveryState::RecoveryMachine, PG::RecoveryState::Start, (boost::statechart::history_mode)0>::react_impl(boost::statechart::event_base const&, void const*)+0xc9) [0x5aec79]
 13: (boost::statechart::simple_state<PG::RecoveryState::Primary, PG::RecoveryState::Started, PG::RecoveryState::Peering, (boost::statechart::history_mode)0>::react_impl(boost::statechart::event_base const&, void const*)+0x101) [0x5af461]
 14: (boost::statechart::simple_state<PG::RecoveryState::Peering, PG::RecoveryState::Primary, PG::RecoveryState::GetInfo, (boost::statechart::history_mode)0>::react_impl(boost::statechart::event_base const&, void const*)+0x95) [0x5b2685]
 15: (boost::statechart::simple_state<PG::RecoveryState::GetInfo, PG::RecoveryState::Peering, boost::mpl::list<mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na>, (boost::statechart::history_mode)0>::react_impl(boost::statechart::event_base const&, void const*)+0x95) [0x5b4005]
 16: (boost::statechart::state_machine<PG::RecoveryState::RecoveryMachine, PG::RecoveryState::Initial, std::allocator<void>, boost::statechart::null_exception_translator>::process_event(boost::statechart::event_base const&)+0x4b) [0x5af64b]
 17: (PG::RecoveryState::handle_log(int, MOSDPGLog*, PG::RecoveryCtx*)+0x142) [0x57d592]
 18: (OSD::handle_pg_log(MOSDPGLog*)+0x34a) [0x52d9ea]
 19: (OSD::_dispatch(Message*)+0x515) [0x52e4d5]
 20: (OSD::ms_dispatch(Message*)+0xd9) [0x52eef9]
 21: (SimpleMessenger::dispatch_entry()+0x907) [0x62a577]
 22: (SimpleMessenger::DispatchThread::entry()+0x1f) [0x49af1f]
 23: /lib/libpthread.so.0 [0x7fda38d29fc7]
 24: (clone()+0x6d) [0x7fda3799a64d]
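Reading the backtrace: a pg_log message (MLogRec) arrived while pg 0.3f was still in Started/Primary/Peering/GetInfo. The react_impl frames show the event being offered to GetInfo, then deferred outward through Peering and Primary to Started, where the fallback reaction constructs the Crashed state, whose constructor hits `FAILED assert(0 == "we got a bad state machine event")` at osd/PG.cc:3882. A rough sketch of that dispatch pattern (hypothetical names, plain C++ without boost::statechart, not Ceph source):

```cpp
#include <functional>
#include <string>
#include <vector>

// Sketch only: boost::statechart offers an event to the innermost
// state's reactions first; each level without a matching reaction
// defers outward. An event that falls off the outermost Started state
// triggers the transition into Crashed, whose constructor asserts.
enum class Event { MNotifyRec, MLogRec };

// Returns the state the machine ends up in after dispatching `e`
// while sitting in Started/Primary/Peering/GetInfo.
inline std::string dispatch_from_get_info(Event e) {
    struct Level { const char* name; std::function<bool(Event)> react; };
    const std::vector<Level> inner_to_outer = {
        {"GetInfo", [](Event ev) { return ev == Event::MNotifyRec; }},
        {"Peering", [](Event)    { return false; }},  // no MLogRec reaction
        {"Primary", [](Event)    { return false; }},  // no MLogRec reaction
        {"Started", [](Event)    { return false; }},  // falls through
    };
    for (const auto& lvl : inner_to_outer)
        if (lvl.react(e))
            return lvl.name;  // event consumed at this level
    // Nothing reacted: the equivalent of entering Crashed and hitting
    // FAILED assert(0 == "we got a bad state machine event").
    return "Crashed";
}
```

So the assert itself is just the symptom; the question in the report is why a pg_log ever reached a primary whose GetInfo state had no reaction for it.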


Thread overview: 3+ messages
2011-07-11 14:26 boost recoverystate handle_log fault huang jun
2011-07-11 19:38 ` Samuel Just
2011-08-04 13:35   ` huang jun