* change strip_cache_size freeze the whole raid
@ 2007-01-22 11:02 kyle
2007-01-22 12:18 ` Justin Piszcz
2007-01-22 20:23 ` Neil Brown
0 siblings, 2 replies; 23+ messages in thread
From: kyle @ 2007-01-22 11:02 UTC (permalink / raw)
To: linux-raid
Hi,
Yesterday I tried increasing the value of stripe_cache_size to see whether I
could get better performance. I increased the value from 2048 to something
like 16384. After I did that, the raid5 array froze. Any process reading from or
writing to it got stuck in D state. I tried changing it back to 2048, reading
stripe_cache_active, cat /proc/mdstat, mdadm stop, etc. None of them came
back. I could not even shut down the machine; in the end I had to press the
reset button to regain control.
Kernel is 2.6.17.8 x86-64, running on an AMD Athlon 3000+ with 2GB RAM, 8 x Seagate
7200.10 250GB HDDs, nvidia chipset.
cat /proc/mdstat (after reboot):
Personalities : [raid1] [raid5] [raid4]
md1 : active raid1 hdc2[1] hda2[0]
6144768 blocks [2/2] [UU]
md2 : active raid5 sdf1[7] sde1[6] sdd1[5] sdc1[4] sdb1[3] sda1[2] hdc4[1]
hda4[0]
1664893440 blocks level 5, 512k chunk, algorithm 2 [8/8] [UUUUUUUU]
md0 : active raid1 hdc1[1] hda1[0]
104320 blocks [2/2] [UU]
Kyle
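For a sense of scale: the md documentation describes the stripe cache as holding one page per member device per cache entry, so the memory cost of a given stripe_cache_size can be estimated up front. A rough sketch (the entries-times-disks-times-page-size formula and the 4 KiB page size are assumptions from the md docs, not from this report):

```shell
# Estimate stripe cache memory: entries * member_disks * page_size,
# following the formula described in Documentation/md.txt
# (page size assumed to be 4 KiB).
stripe_cache_mem_mib() {
    local entries=$1 disks=$2 page=4096
    echo $(( entries * disks * page / 1024 / 1024 ))
}

# The 8-disk raid5 above, at the two values from the report:
stripe_cache_mem_mib 2048 8    # -> 64 MiB
stripe_cache_mem_mib 16384 8   # -> 512 MiB, a quarter of the machine's 2 GB
```

So the jump from 2048 to 16384 asks the kernel to grow the cache by roughly 448 MiB in one step, which is consistent with an allocation-related hang on a 2 GB box.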
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: change strip_cache_size freeze the whole raid
2007-01-22 11:02 change strip_cache_size freeze the whole raid kyle
@ 2007-01-22 12:18 ` Justin Piszcz
2007-01-22 13:09 ` kyle
` (2 more replies)
2007-01-22 20:23 ` Neil Brown
1 sibling, 3 replies; 23+ messages in thread
From: Justin Piszcz @ 2007-01-22 12:18 UTC (permalink / raw)
To: kyle; +Cc: linux-raid, linux-kernel
On Mon, 22 Jan 2007, kyle wrote:
> Hi,
>
> Yesterday I tried to increase the value of strip_cache_size to see if I can
> get better performance or not. I increase the value from 2048 to something
> like 16384. After I did that, the raid5 freeze. Any proccess read / write to
> it stucked at D state. I tried to change it back to 2048, read
> strip_cache_active, cat /proc/mdstat, mdadm stop, etc. All didn't return back.
> I even cannot shutdown the machine. Finally I need to press the reset button
> in order to get back my control.
>
> Kernel is 2.6.17.8 x86-64, running at AMD Athlon3000+, 2GB Ram, 8 x Seagate
> 8200.10 250GB HDD, nvidia chipset.
>
> cat /proc/mdstat (after reboot):
> Personalities : [raid1] [raid5] [raid4]
> md1 : active raid1 hdc2[1] hda2[0]
> 6144768 blocks [2/2] [UU]
>
> md2 : active raid5 sdf1[7] sde1[6] sdd1[5] sdc1[4] sdb1[3] sda1[2] hdc4[1]
> hda4[0]
> 1664893440 blocks level 5, 512k chunk, algorithm 2 [8/8] [UUUUUUUU]
>
> md0 : active raid1 hdc1[1] hda1[0]
> 104320 blocks [2/2] [UU]
>
> Kyle
>
> -
> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
>
Yes, I noticed this bug too. If you change it too many times, or change it
at the 'wrong' time, it hangs when you echo a number into
/sys/block/mdX/md/stripe_cache_size.
Basically, don't run it more than once and don't run it at the 'wrong' time,
and it works. Not sure where the bug lies, but yeah, I've seen that on 3
different machines!
Justin.
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: change strip_cache_size freeze the whole raid
2007-01-22 12:18 ` Justin Piszcz
@ 2007-01-22 13:09 ` kyle
2007-01-22 14:57 ` Steve Cousins
2007-01-22 16:10 ` Liang Yang
2 siblings, 0 replies; 23+ messages in thread
From: kyle @ 2007-01-22 13:09 UTC (permalink / raw)
To: Justin Piszcz; +Cc: linux-raid, linux-kernel
>
> On Mon, 22 Jan 2007, kyle wrote:
>
>> Hi,
>>
>> Yesterday I tried to increase the value of strip_cache_size to see if I
>> can
>> get better performance or not. I increase the value from 2048 to
>> something
>> like 16384. After I did that, the raid5 freeze. Any proccess read / write
>> to
>> it stucked at D state. I tried to change it back to 2048, read
>> strip_cache_active, cat /proc/mdstat, mdadm stop, etc. All didn't return
>> back.
>> I even cannot shutdown the machine. Finally I need to press the reset
>> button
>> in order to get back my control.
> Yes, I noticed this bug too, if you change it too many times or change it
> at the 'wrong' time, it hangs up when you echo numbr >
> /proc/stripe_cache_size.
>
> Basically don't run it more than once and don't run it at the 'wrong' time
> and it works. Not sure where the bug lies, but yeah I've seen that on 3
> different machines!
>
> Justin.
>
>
I just changed it once, and then it froze. It's hard to hit the 'right time'.
Actually, I had tried it several times before. As I remember, there was once when it
froze for around 1 or 2 minutes and then returned to normal operation. This is
the first time it froze completely; I waited around 10 minutes and it
still didn't wake up.
Kyle
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: change strip_cache_size freeze the whole raid
2007-01-22 13:09 ` kyle
(?)
@ 2007-01-22 14:56 ` Justin Piszcz
2007-01-22 15:18 ` kyle
-1 siblings, 1 reply; 23+ messages in thread
From: Justin Piszcz @ 2007-01-22 14:56 UTC (permalink / raw)
To: kyle; +Cc: linux-raid, linux-kernel
On Mon, 22 Jan 2007, kyle wrote:
> >
> > On Mon, 22 Jan 2007, kyle wrote:
> >
> > > Hi,
> > >
> > > Yesterday I tried to increase the value of strip_cache_size to see if I
> > > can
> > > get better performance or not. I increase the value from 2048 to something
> > > like 16384. After I did that, the raid5 freeze. Any proccess read / write
> > > to
> > > it stucked at D state. I tried to change it back to 2048, read
> > > strip_cache_active, cat /proc/mdstat, mdadm stop, etc. All didn't return
> > > back.
> > > I even cannot shutdown the machine. Finally I need to press the reset
> > > button
> > > in order to get back my control.
>
> > Yes, I noticed this bug too, if you change it too many times or change it
> > at the 'wrong' time, it hangs up when you echo numbr >
> > /proc/stripe_cache_size.
> >
> > Basically don't run it more than once and don't run it at the 'wrong' time
> > and it works. Not sure where the bug lies, but yeah I've seen that on 3
> > different machines!
> >
> > Justin.
> >
> >
>
> I just change it once, then it freeze. It's hard to get the 'right time'
>
> Actually I tried it several times before. As I remember there was once it
> freezed for around 1 or 2 minutes , then back to normal operation. This is the
> first time it completely freezed and I waited after around 10 minutes it still
> didn't wake up.
>
> Kyle
>
What kernel version are you using? It normally works the first time for
me; I put it in my startup scripts as one of the last items. However, if
I change it a few times, it will hang, and there is no way to reboot except
via SysRq or pressing the reset button on the machine.
This seems to be true of 2.6.19.1 and 2.6.19.2; I did not try under
2.6.20-rc5 because I am tired of hanging my machine :)
Justin.
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: change strip_cache_size freeze the whole raid
2007-01-22 12:18 ` Justin Piszcz
2007-01-22 13:09 ` kyle
@ 2007-01-22 14:57 ` Steve Cousins
2007-01-22 15:01 ` Justin Piszcz
` (2 more replies)
2007-01-22 16:10 ` Liang Yang
2 siblings, 3 replies; 23+ messages in thread
From: Steve Cousins @ 2007-01-22 14:57 UTC (permalink / raw)
To: Justin Piszcz; +Cc: kyle, linux-raid, linux-kernel
Justin Piszcz wrote:
> Yes, I noticed this bug too, if you change it too many times or change it
> at the 'wrong' time, it hangs up when you echo numbr >
> /proc/stripe_cache_size.
>
> Basically don't run it more than once and don't run it at the 'wrong' time
> and it works. Not sure where the bug lies, but yeah I've seen that on 3
> different machines!
Can you tell us when the "right" time is or maybe what the "wrong" time
is? Also, is this kernel specific? Does it (increasing
stripe_cache_size) work with RAID6 too?
Thanks,
Steve
--
______________________________________________________________________
Steve Cousins, Ocean Modeling Group Email: cousins@umit.maine.edu
Marine Sciences, 452 Aubert Hall http://rocky.umeoce.maine.edu
Univ. of Maine, Orono, ME 04469 Phone: (207) 581-4302
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: change strip_cache_size freeze the whole raid
2007-01-22 14:57 ` Steve Cousins
@ 2007-01-22 15:01 ` Justin Piszcz
2007-01-23 14:22 ` kyle
2007-01-22 15:10 ` Justin Piszcz
2007-01-22 15:13 ` kyle
2 siblings, 1 reply; 23+ messages in thread
From: Justin Piszcz @ 2007-01-22 15:01 UTC (permalink / raw)
To: Steve Cousins; +Cc: kyle, linux-raid, linux-kernel
On Mon, 22 Jan 2007, Steve Cousins wrote:
>
>
> Justin Piszcz wrote:
> > Yes, I noticed this bug too, if you change it too many times or change it at
> > the 'wrong' time, it hangs up when you echo numbr > /proc/stripe_cache_size.
> >
> > Basically don't run it more than once and don't run it at the 'wrong' time
> > and it works. Not sure where the bug lies, but yeah I've seen that on 3
> > different machines!
>
> Can you tell us when the "right" time is or maybe what the "wrong" time is?
> Also, is this kernel specific? Does it (increasing stripe_cache_size) work
> with RAID6 too?
>
> Thanks,
>
> Steve
> --
> ______________________________________________________________________
> Steve Cousins, Ocean Modeling Group Email: cousins@umit.maine.edu
> Marine Sciences, 452 Aubert Hall http://rocky.umeoce.maine.edu
> Univ. of Maine, Orono, ME 04469 Phone: (207) 581-4302
>
>
>
The wrong time (for me anyway) is at or around the time the kernel
is auto-detecting arrays / udev starts; when I put it there I get oopses all
over the screen and it gets really nasty. Basically, the best time appears
to be right after the system has started up, before I/O starts hitting
the array. Tricky, I know.
Justin.
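The workaround described above can be sketched as a small guard: check that the tunable exists and that the array looks quiescent before writing. The sysfs directory is passed in as a parameter so the logic can be exercised outside /sys; treating stripe_cache_active at 0 as "safe" is only a heuristic drawn from this thread, not a guaranteed way to dodge the underlying bug:

```shell
# Resize the stripe cache only when the array appears idle.
# mddir is e.g. /sys/block/md2/md (passed in so this stays testable).
set_stripe_cache() {
    local mddir=$1 value=$2
    [ -w "$mddir/stripe_cache_size" ] || {
        echo "no writable stripe_cache_size tunable" >&2; return 1;
    }
    # Heuristic from this thread: skip the resize while stripes are in flight.
    if [ "$(cat "$mddir/stripe_cache_active")" != "0" ]; then
        echo "array busy, not resizing" >&2
        return 1
    fi
    echo "$value" > "$mddir/stripe_cache_size"
}
```

Called late in boot as `set_stripe_cache /sys/block/md2/md 4096`, this matches the "right after startup, before I/O" window described above.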
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: change strip_cache_size freeze the whole raid
2007-01-22 14:57 ` Steve Cousins
2007-01-22 15:01 ` Justin Piszcz
@ 2007-01-22 15:10 ` Justin Piszcz
2007-01-22 15:13 ` kyle
2 siblings, 0 replies; 23+ messages in thread
From: Justin Piszcz @ 2007-01-22 15:10 UTC (permalink / raw)
To: Steve Cousins; +Cc: kyle, linux-raid, linux-kernel
On Mon, 22 Jan 2007, Steve Cousins wrote:
>
>
> Justin Piszcz wrote:
> > Yes, I noticed this bug too, if you change it too many times or change it at
> > the 'wrong' time, it hangs up when you echo numbr > /proc/stripe_cache_size.
> >
> > Basically don't run it more than once and don't run it at the 'wrong' time
> > and it works. Not sure where the bug lies, but yeah I've seen that on 3
> > different machines!
>
> Can you tell us when the "right" time is or maybe what the "wrong" time is?
> Also, is this kernel specific? Does it (increasing stripe_cache_size) work
> with RAID6 too?
>
> Thanks,
>
> Steve
> --
> ______________________________________________________________________
> Steve Cousins, Ocean Modeling Group Email: cousins@umit.maine.edu
> Marine Sciences, 452 Aubert Hall http://rocky.umeoce.maine.edu
> Univ. of Maine, Orono, ME 04469 Phone: (207) 581-4302
>
>
Also, I have not tested stripe_cache_size under RAID6; I am unsure.
Justin.
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: change strip_cache_size freeze the whole raid
2007-01-22 14:57 ` Steve Cousins
@ 2007-01-22 15:13 ` kyle
2007-01-22 15:10 ` Justin Piszcz
2007-01-22 15:13 ` kyle
2 siblings, 0 replies; 23+ messages in thread
From: kyle @ 2007-01-22 15:13 UTC (permalink / raw)
To: Steve Cousins, Justin Piszcz; +Cc: linux-raid, linux-kernel
> Justin Piszcz wrote:
>> Yes, I noticed this bug too, if you change it too many times or change it
>> at the 'wrong' time, it hangs up when you echo numbr >
>> /proc/stripe_cache_size.
>>
>> Basically don't run it more than once and don't run it at the 'wrong'
>> time and it works. Not sure where the bug lies, but yeah I've seen that
>> on 3 different machines!
>
> Can you tell us when the "right" time is or maybe what the "wrong" time
> is? Also, is this kernel specific? Does it (increasing
> stripe_cache_size) work with RAID6 too?
>
> Thanks,
>
> Steve
I think if your /sys/block/md_your_raid6/md/ has a file
"stripe_cache_size", then it should work with RAID6 too.
Kyle
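That check can be scripted. The sketch below takes the sysfs block root as a parameter (normally /sys/block) so it can be exercised against a fake tree; the /sys/block/mdX/md/stripe_cache_size layout is the one used elsewhere in this thread:

```shell
# List md devices under $root that expose the stripe_cache_size tunable
# (the raid4/5/6 personalities; raid0/1 arrays do not have the file).
arrays_with_stripe_cache() {
    local root=$1 d
    for d in "$root"/md*; do
        [ -e "$d/md/stripe_cache_size" ] && basename "$d"
    done
    return 0
}
```

For example, `arrays_with_stripe_cache /sys/block` on the reporter's machine should print only md2, since md0 and md1 are raid1.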
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: change strip_cache_size freeze the whole raid
2007-01-22 14:56 ` Justin Piszcz
@ 2007-01-22 15:18 ` kyle
0 siblings, 0 replies; 23+ messages in thread
From: kyle @ 2007-01-22 15:18 UTC (permalink / raw)
To: Justin Piszcz; +Cc: linux-raid, linux-kernel
>>
>> > Yes, I noticed this bug too, if you change it too many times or change
>> > it
>> > at the 'wrong' time, it hangs up when you echo numbr >
>> > /proc/stripe_cache_size.
>> >
>> > Basically don't run it more than once and don't run it at the 'wrong'
>> > time
>> > and it works. Not sure where the bug lies, but yeah I've seen that on
>> > 3
>> > different machines!
>> >
>> > Justin.
>> >
>> >
>>
>> I just change it once, then it freeze. It's hard to get the 'right time'
>>
>> Actually I tried it several times before. As I remember there was once it
>> freezed for around 1 or 2 minutes , then back to normal operation. This
>> is the
>> first time it completely freezed and I waited after around 10 minutes it
>> still
>> didn't wake up.
>>
>> Kyle
>>
>
> What kernel version are you using? It normally works the first time for
> me, I put it in my startup scripts, as one of the last items. However, if
> I change it a few times, it will hang and there is no way to reboot except
> via SYSRQ or pressing the reboot button on the machine.
>
> This seems to be true of 2.6.19.1 and 2.6.19.2, I did not try under
> 2.6.20-rc5 because I am tired of hanging my machine :)
>
> Justin.
>
It was 2.6.17.8. Now it's 2.6.17.13, but I won't touch it now! It's around
15 km from me!
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: change strip_cache_size freeze the whole raid
2007-01-22 12:18 ` Justin Piszcz
@ 2007-01-22 16:10 ` Liang Yang
2007-01-22 14:57 ` Steve Cousins
2007-01-22 16:10 ` Liang Yang
2 siblings, 0 replies; 23+ messages in thread
From: Liang Yang @ 2007-01-22 16:10 UTC (permalink / raw)
To: Justin Piszcz, kyle; +Cc: linux-raid, linux-kernel
Do we need to consider the chunk size when we adjust the value of
stripe_cache_size for the MD RAID5 array?
Liang
----- Original Message -----
From: "Justin Piszcz" <jpiszcz@lucidpixels.com>
To: "kyle" <kylewong@southa.com>
Cc: <linux-raid@vger.kernel.org>; <linux-kernel@vger.kernel.org>
Sent: Monday, January 22, 2007 5:18 AM
Subject: Re: change strip_cache_size freeze the whole raid
>
>
> On Mon, 22 Jan 2007, kyle wrote:
>
>> Hi,
>>
>> Yesterday I tried to increase the value of strip_cache_size to see if I
>> can
>> get better performance or not. I increase the value from 2048 to
>> something
>> like 16384. After I did that, the raid5 freeze. Any proccess read / write
>> to
>> it stucked at D state. I tried to change it back to 2048, read
>> strip_cache_active, cat /proc/mdstat, mdadm stop, etc. All didn't return
>> back.
>> I even cannot shutdown the machine. Finally I need to press the reset
>> button
>> in order to get back my control.
>>
>> Kernel is 2.6.17.8 x86-64, running at AMD Athlon3000+, 2GB Ram, 8 x
>> Seagate
>> 8200.10 250GB HDD, nvidia chipset.
>>
>> cat /proc/mdstat (after reboot):
>> Personalities : [raid1] [raid5] [raid4]
>> md1 : active raid1 hdc2[1] hda2[0]
>> 6144768 blocks [2/2] [UU]
>>
>> md2 : active raid5 sdf1[7] sde1[6] sdd1[5] sdc1[4] sdb1[3] sda1[2]
>> hdc4[1]
>> hda4[0]
>> 1664893440 blocks level 5, 512k chunk, algorithm 2 [8/8] [UUUUUUUU]
>>
>> md0 : active raid1 hdc1[1] hda1[0]
>> 104320 blocks [2/2] [UU]
>>
>> Kyle
>>
>
> Yes, I noticed this bug too, if you change it too many times or change it
> at the 'wrong' time, it hangs up when you echo numbr >
> /proc/stripe_cache_size.
>
> Basically don't run it more than once and don't run it at the 'wrong' time
> and it works. Not sure where the bug lies, but yeah I've seen that on 3
> different machines!
>
> Justin.
>
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: change strip_cache_size freeze the whole raid
2007-01-22 11:02 change strip_cache_size freeze the whole raid kyle
2007-01-22 12:18 ` Justin Piszcz
@ 2007-01-22 20:23 ` Neil Brown
2007-01-22 22:47 ` Neil Brown
` (2 more replies)
1 sibling, 3 replies; 23+ messages in thread
From: Neil Brown @ 2007-01-22 20:23 UTC (permalink / raw)
To: kyle; +Cc: linux-raid
On Monday January 22, kylewong@southa.com wrote:
> Hi,
>
> Yesterday I tried to increase the value of strip_cache_size to see if I can
> get better performance or not. I increase the value from 2048 to something
> like 16384. After I did that, the raid5 freeze. Any proccess read / write to
> it stucked at D state. I tried to change it back to 2048, read
> strip_cache_active, cat /proc/mdstat, mdadm stop, etc. All didn't return
> back. I even cannot shutdown the machine. Finally I need to press the reset
> button in order to get back my control.
Thanks for reporting this.
alt-sysrq-T or "echo t > /proc/sysrq-trigger" can be really helpful to
diagnose this sort of problem (providing the system isn't so badly
stuck that the kernel logs don't get stored).
It is probably hitting a memory-allocation deadlock, though I cannot
see exactly where the deadlock would be. If you are able to reproduce
it and can get the kernel logs after 'alt-sysrq-T' I would really
appreciate it.
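When collecting that alt-sysrq-T output, the interesting tasks are the ones in uninterruptible sleep (state D). A small filter like the one below makes them easy to spot; the assumption that the state letter is the second whitespace-separated field matches typical 2.6-era dump lines, but the captured log may carry a syslog prefix that shifts the fields:

```shell
# Print the names of tasks shown in D state in a captured sysrq-T dump.
list_d_state() {
    awk '$2 == "D" { print $1 }' "$1"
}
```

Run against the saved dump (e.g. `list_d_state dump.txt`), it should surface the md thread and any stuck readers/writers.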
This patch will almost certainly fix the problem, though I would like
to completely understand it first....
NeilBrown
Signed-off-by: Neil Brown <neilb@suse.de>
### Diffstat output
./drivers/md/raid5.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff .prev/drivers/md/raid5.c ./drivers/md/raid5.c
--- .prev/drivers/md/raid5.c 2007-01-22 09:08:16.000000000 +1100
+++ ./drivers/md/raid5.c 2007-01-23 07:17:25.000000000 +1100
@@ -205,7 +205,7 @@ static int grow_buffers(struct stripe_he
for (i=0; i<num; i++) {
struct page *page;
- if (!(page = alloc_page(GFP_KERNEL))) {
+ if (!(page = alloc_page(GFP_IO))) {
return 1;
}
sh->dev[i].page = page;
@@ -321,7 +321,7 @@ static struct stripe_head *get_active_st
static int grow_one_stripe(raid5_conf_t *conf)
{
struct stripe_head *sh;
- sh = kmem_cache_alloc(conf->slab_cache, GFP_KERNEL);
+ sh = kmem_cache_alloc(conf->slab_cache, GFP_IO);
if (!sh)
return 0;
memset(sh, 0, sizeof(*sh) + (conf->raid_disks-1)*sizeof(struct r5dev));
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: change strip_cache_size freeze the whole raid
2007-01-22 20:23 ` Neil Brown
@ 2007-01-22 22:47 ` Neil Brown
2007-01-23 10:57 ` Justin Piszcz
2007-01-24 23:24 ` Justin Piszcz
2 siblings, 0 replies; 23+ messages in thread
From: Neil Brown @ 2007-01-22 22:47 UTC (permalink / raw)
To: kyle, linux-raid
On Tuesday January 23, neilb@suse.de wrote:
>
> This patch will almost certainly fix the problem, though I would like
> to completely understand it first....
Of course, that patch didn't compile.... The "GFP_IO" should have been
"GFP_NOIO".
As below.
NeilBrown
--------------------------
Avoid possible malloc deadlock in raid5.
Due to reports of raid5 hanging when growing the stripe cache,
it is best to use GFP_NOIO for those allocations. We would rather
fail than deadlock.
Signed-off-by: Neil Brown <neilb@suse.de>
### Diffstat output
./drivers/md/raid5.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff .prev/drivers/md/raid5.c ./drivers/md/raid5.c
--- .prev/drivers/md/raid5.c 2007-01-23 09:44:22.000000000 +1100
+++ ./drivers/md/raid5.c 2007-01-23 09:44:43.000000000 +1100
@@ -205,7 +205,7 @@ static int grow_buffers(struct stripe_he
for (i=0; i<num; i++) {
struct page *page;
- if (!(page = alloc_page(GFP_KERNEL))) {
+ if (!(page = alloc_page(GFP_NOIO))) {
return 1;
}
sh->dev[i].page = page;
@@ -321,7 +321,7 @@ static struct stripe_head *get_active_st
static int grow_one_stripe(raid5_conf_t *conf)
{
struct stripe_head *sh;
- sh = kmem_cache_alloc(conf->slab_cache, GFP_KERNEL);
+ sh = kmem_cache_alloc(conf->slab_cache, GFP_NOIO);
if (!sh)
return 0;
memset(sh, 0, sizeof(*sh) + (conf->raid_disks-1)*sizeof(struct r5dev));
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: change strip_cache_size freeze the whole raid
2007-01-22 20:23 ` Neil Brown
2007-01-22 22:47 ` Neil Brown
@ 2007-01-23 10:57 ` Justin Piszcz
2007-01-24 23:24 ` Justin Piszcz
2 siblings, 0 replies; 23+ messages in thread
From: Justin Piszcz @ 2007-01-23 10:57 UTC (permalink / raw)
To: Neil Brown; +Cc: kyle, linux-raid
I can try and do this later this week possibly.
Justin.
On Tue, 23 Jan 2007, Neil Brown wrote:
> On Monday January 22, kylewong@southa.com wrote:
> > Hi,
> >
> > Yesterday I tried to increase the value of strip_cache_size to see if I can
> > get better performance or not. I increase the value from 2048 to something
> > like 16384. After I did that, the raid5 freeze. Any proccess read / write to
> > it stucked at D state. I tried to change it back to 2048, read
> > strip_cache_active, cat /proc/mdstat, mdadm stop, etc. All didn't return
> > back. I even cannot shutdown the machine. Finally I need to press the reset
> > button in order to get back my control.
>
> Thanks for reporting this.
>
> alt-sysrq-T or "echo t > /proc/sysrq-trigger" can be really helpful to
> diagnose this sort of problem (providing the system isn't so badly
> stuck that the kernel logs don't get stored).
>
> It is probably hitting a memory-allocation deadlock, though I cannot
> see exactly where the deadlock would be. If you are able to reproduce
> it and can get the kernel logs after 'alt-sysrq-T' I would really
> appreciate it.
>
> This patch will almost certainly fix the problem, though I would like
> to completely understand it first....
>
> NeilBrown
>
>
>
> Signed-off-by: Neil Brown <neilb@suse.de>
>
> ### Diffstat output
> ./drivers/md/raid5.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff .prev/drivers/md/raid5.c ./drivers/md/raid5.c
> --- .prev/drivers/md/raid5.c 2007-01-22 09:08:16.000000000 +1100
> +++ ./drivers/md/raid5.c 2007-01-23 07:17:25.000000000 +1100
> @@ -205,7 +205,7 @@ static int grow_buffers(struct stripe_he
> for (i=0; i<num; i++) {
> struct page *page;
>
> - if (!(page = alloc_page(GFP_KERNEL))) {
> + if (!(page = alloc_page(GFP_IO))) {
> return 1;
> }
> sh->dev[i].page = page;
> @@ -321,7 +321,7 @@ static struct stripe_head *get_active_st
> static int grow_one_stripe(raid5_conf_t *conf)
> {
> struct stripe_head *sh;
> - sh = kmem_cache_alloc(conf->slab_cache, GFP_KERNEL);
> + sh = kmem_cache_alloc(conf->slab_cache, GFP_IO);
> if (!sh)
> return 0;
> memset(sh, 0, sizeof(*sh) + (conf->raid_disks-1)*sizeof(struct r5dev));
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: change strip_cache_size freeze the whole raid
2007-01-22 15:01 ` Justin Piszcz
@ 2007-01-23 14:22 ` kyle
0 siblings, 0 replies; 23+ messages in thread
From: kyle @ 2007-01-23 14:22 UTC (permalink / raw)
To: Justin Piszcz, Steve Cousins; +Cc: linux-raid, linux-kernel
> I can try and do this later this week possibly.
> Justin.
>>
>> alt-sysrq-T or "echo t > /proc/sysrq-trigger" can be really helpful to
>> diagnose this sort of problem (providing the system isn't so badly
>> stuck that the kernel logs don't get stored).
>>
>> It is probably hitting a memory-allocation deadlock, though I cannot
>> see exactly where the deadlock would be. If you are able to reproduce
>> it and can get the kernel logs after 'alt-sysrq-T' I would really
>> appreciate it.

Justin, maybe you can try freezing it once more and get the kernel logs
before trying Neil's patch ...... :D

Kyle
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: change strip_cache_size freeze the whole raid
2007-01-22 20:23 ` Neil Brown
2007-01-22 22:47 ` Neil Brown
2007-01-23 10:57 ` Justin Piszcz
@ 2007-01-24 23:24 ` Justin Piszcz
2007-01-25 0:13 ` Neil Brown
2 siblings, 1 reply; 23+ messages in thread
From: Justin Piszcz @ 2007-01-24 23:24 UTC (permalink / raw)
To: Neil Brown; +Cc: kyle, linux-raid
[-- Attachment #1: Type: TEXT/PLAIN, Size: 2875 bytes --]
Here you go, Neil:
p34:~# echo 512 > /sys/block/md3/md/stripe_cache_size
p34:~# echo 1024 > /sys/block/md3/md/stripe_cache_size
p34:~# echo 2048 > /sys/block/md3/md/stripe_cache_size
p34:~# echo 4096 > /sys/block/md3/md/stripe_cache_size
p34:~# echo 8192 > /sys/block/md3/md/stripe_cache_size
<...... FROZEN ........>
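(For a rough sense of scale — hedged back-of-envelope only, counting one page per member device per cached stripe as the md documentation describes, and ignoring struct/slab overhead — the jump in the original report, 2048 to 16384 on an 8-disk array, asks the kernel to grow the cache by several hundred MiB in one go on a 2 GB box:)

```python
PAGE_SIZE = 4096  # bytes; typical for x86/x86-64

def stripe_cache_bytes(stripe_cache_size, raid_disks):
    """Approximate raid5 stripe-cache footprint: one data page per
    member device per cached stripe (struct overhead ignored)."""
    return stripe_cache_size * raid_disks * PAGE_SIZE

# Original report: 2048 -> 16384 on an 8-disk raid5 with 2 GB RAM.
MIB = 2 ** 20
print(stripe_cache_bytes(2048, 8) // MIB)    # 64 (MiB)
print(stripe_cache_bytes(16384, 8) // MIB)   # 512 (MiB)
```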
I ran echo t > /proc/sysrq-trigger, copied the relevant parts of
kern.log, and attached them to this e-mail.
Please confirm this is what you needed.
Justin.
On Tue, 23 Jan 2007, Neil Brown wrote:
> On Monday January 22, kylewong@southa.com wrote:
> > Hi,
> >
> > Yesterday I tried to increase the value of strip_cache_size to see if I can
> > get better performance or not. I increase the value from 2048 to something
> > like 16384. After I did that, the raid5 freeze. Any proccess read / write to
> > it stucked at D state. I tried to change it back to 2048, read
> > strip_cache_active, cat /proc/mdstat, mdadm stop, etc. All didn't return
> > back. I even cannot shutdown the machine. Finally I need to press the reset
> > button in order to get back my control.
>
> Thanks for reporting this.
>
> alt-sysrq-T or "echo t > /proc/sysrq-trigger" can be really helpful to
> diagnose this sort of problem (providing the system isn't so badly
> stuck that the kernel logs don't get stored).
>
> It is probably hitting a memory-allocation deadlock, though I cannot
> see exactly where the deadlock would be. If you are able to reproduce
> it and can get the kernel logs after 'alt-sysrq-T' I would really
> appreciate it.
>
> This patch will almost certainly fix the problem, though I would like
> to completely understand it first....
>
> NeilBrown
>
>
>
> Signed-off-by: Neil Brown <neilb@suse.de>
>
> ### Diffstat output
> ./drivers/md/raid5.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff .prev/drivers/md/raid5.c ./drivers/md/raid5.c
> --- .prev/drivers/md/raid5.c 2007-01-22 09:08:16.000000000 +1100
> +++ ./drivers/md/raid5.c 2007-01-23 07:17:25.000000000 +1100
> @@ -205,7 +205,7 @@ static int grow_buffers(struct stripe_he
> for (i=0; i<num; i++) {
> struct page *page;
>
> - if (!(page = alloc_page(GFP_KERNEL))) {
> + if (!(page = alloc_page(GFP_NOIO))) {
> return 1;
> }
> sh->dev[i].page = page;
> @@ -321,7 +321,7 @@ static struct stripe_head *get_active_st
> static int grow_one_stripe(raid5_conf_t *conf)
> {
> struct stripe_head *sh;
> - sh = kmem_cache_alloc(conf->slab_cache, GFP_KERNEL);
> + sh = kmem_cache_alloc(conf->slab_cache, GFP_NOIO);
> if (!sh)
> return 0;
> memset(sh, 0, sizeof(*sh) + (conf->raid_disks-1)*sizeof(struct r5dev));
> -
>
[-- Attachment #2: Type: TEXT/plain, Size: 65763 bytes --]
1 Jan 21 14:23:39 p34 kernel: [ 12.855165] #0: HDA Intel at 0x90720000 irq 22
2 Jan 21 14:23:39 p34 kernel: [ 12.855217] u32 classifier
3 Jan 21 14:23:39 p34 kernel: [ 12.855271] Actions configured
4 Jan 21 14:23:39 p34 kernel: [ 12.855318] nf_conntrack version 0.5.0 (8192 buckets, 65536 max)
5 Jan 21 14:23:39 p34 kernel: [ 12.855472] ip_tables: (C) 2000-2006 Netfilter Core Team
6 Jan 21 14:23:39 p34 kernel: [ 12.922421] TCP cubic registered
7 Jan 21 14:23:39 p34 kernel: [ 12.922471] NET: Registered protocol family 1
8 Jan 21 14:23:39 p34 kernel: [ 12.922524] NET: Registered protocol family 17
9 Jan 21 14:23:39 p34 kernel: [ 12.922612] Testing NMI watchdog ... OK.
10 Jan 21 14:23:39 p34 kernel: [ 12.932684] Starting balanced_irq
11 Jan 21 14:23:39 p34 kernel: [ 12.932733] Using IPI Shortcut mode
12 Jan 21 14:23:39 p34 kernel: [ 12.932916] md: Autodetecting RAID arrays.
13 Jan 21 14:23:39 p34 kernel: [ 12.932991] Time: tsc clocksource has been installed.
14 Jan 21 14:23:39 p34 kernel: [ 12.996335] md: invalid raid superblock magic on sdc1
15 Jan 21 14:23:39 p34 kernel: [ 12.996384] md: sdc1 has invalid sb, not importing!
16 Jan 21 14:23:39 p34 kernel: [ 13.013726] md: invalid raid superblock magic on sdd1
17 Jan 21 14:23:39 p34 kernel: [ 13.013774] md: sdd1 has invalid sb, not importing!
18 Jan 21 14:23:39 p34 kernel: [ 13.026536] md: invalid raid superblock magic on sde1
19 Jan 21 14:23:39 p34 kernel: [ 13.026585] md: sde1 has invalid sb, not importing!
20 Jan 21 14:23:39 p34 kernel: [ 13.045933] md: invalid raid superblock magic on sdf1
21 Jan 21 14:23:39 p34 kernel: [ 13.045985] md: sdf1 has invalid sb, not importing!
22 Jan 21 14:23:39 p34 kernel: [ 13.055061] md: invalid raid superblock magic on sdg1
23 Jan 21 14:23:39 p34 kernel: [ 13.055109] md: sdg1 has invalid sb, not importing!
24 Jan 21 14:23:39 p34 kernel: [ 13.074861] md: invalid raid superblock magic on sdh1
25 Jan 21 14:23:39 p34 kernel: [ 13.074909] md: sdh1 has invalid sb, not importing!
26 Jan 21 14:23:39 p34 kernel: [ 13.081479] md: invalid raid superblock magic on sdi1
27 Jan 21 14:23:39 p34 kernel: [ 13.081528] md: sdi1 has invalid sb, not importing!
28 Jan 21 14:23:39 p34 kernel: [ 13.095440] md: invalid raid superblock magic on sdj1
29 Jan 21 14:23:39 p34 kernel: [ 13.095488] md: sdj1 has invalid sb, not importing!
30 Jan 21 14:23:39 p34 kernel: [ 13.104266] md: invalid raid superblock magic on sdk1
31 Jan 21 14:23:39 p34 kernel: [ 13.104314] md: sdk1 has invalid sb, not importing!
32 Jan 21 14:23:39 p34 kernel: [ 13.104367] md: autorun ...
33 Jan 21 14:23:39 p34 kernel: [ 13.104413] md: considering sdb3 ...
34 Jan 21 14:23:39 p34 kernel: [ 13.104465] md: adding sdb3 ...
35 Jan 21 14:23:39 p34 kernel: [ 13.104514] md: sdb2 has different UUID to sdb3
36 Jan 21 14:23:39 p34 kernel: [ 13.104565] md: sdb1 has different UUID to sdb3
37 Jan 21 14:23:39 p34 kernel: [ 13.104618] md: adding sda3 ...
38 Jan 21 14:23:39 p34 kernel: [ 13.104677] md: sda2 has different UUID to sdb3
39 Jan 21 14:23:39 p34 kernel: [ 13.104725] md: sda1 has different UUID to sdb3
40 Jan 21 14:23:39 p34 kernel: [ 13.104802] md: created md2
41 Jan 21 14:23:39 p34 kernel: [ 13.104846] md: bind<sda3>
42 Jan 21 14:23:39 p34 kernel: [ 13.104897] md: bind<sdb3>
43 Jan 21 14:23:39 p34 kernel: [ 13.104945] md: running: <sdb3><sda3>
44 Jan 21 14:23:39 p34 kernel: [ 13.105100] raid1: raid set md2 active with 2 out of 2 mirrors
45 Jan 21 14:23:39 p34 kernel: [ 13.105182] md: considering sdb2 ...
46 Jan 21 14:23:39 p34 kernel: [ 13.105227] md: adding sdb2 ...
47 Jan 21 14:23:39 p34 kernel: [ 13.105274] md: sdb1 has different UUID to sdb2
48 Jan 21 14:23:39 p34 kernel: [ 13.105324] md: adding sda2 ...
49 Jan 21 14:23:39 p34 kernel: [ 13.105367] md: sda1 has different UUID to sdb2
50 Jan 21 14:23:39 p34 kernel: [ 13.105451] md: created md1
51 Jan 21 14:23:39 p34 kernel: [ 13.106134] md: bind<sda2>
52 Jan 21 14:23:39 p34 kernel: [ 13.106182] md: bind<sdb2>
53 Jan 21 14:23:39 p34 kernel: [ 13.106229] md: running: <sdb2><sda2>
54 Jan 21 14:23:39 p34 kernel: [ 13.106377] raid1: raid set md1 active with 2 out of 2 mirrors
55 Jan 21 14:23:39 p34 kernel: [ 13.106451] md: considering sdb1 ...
56 Jan 21 14:23:39 p34 kernel: [ 13.106500] md: adding sdb1 ...
57 Jan 21 14:23:39 p34 kernel: [ 13.106547] md: adding sda1 ...
58 Jan 21 14:23:39 p34 kernel: [ 13.106592] md: created md0
59 Jan 21 14:23:39 p34 kernel: [ 13.106635] md: bind<sda1>
60 Jan 21 14:23:39 p34 kernel: [ 13.106683] md: bind<sdb1>
61 Jan 21 14:23:39 p34 kernel: [ 13.106731] md: running: <sdb1><sda1>
62 Jan 21 14:23:39 p34 kernel: [ 13.106874] raid1: raid set md0 active with 2 out of 2 mirrors
63 Jan 21 14:23:39 p34 kernel: [ 13.106951] md: ... autorun DONE.
64 Jan 21 14:23:39 p34 kernel: [ 13.138339] UDF-fs: No VRS found
65 Jan 21 14:23:39 p34 kernel: [ 13.138555] Filesystem "md2": Disabling barriers, not supported by the underlying device
66 Jan 21 14:23:39 p34 kernel: [ 13.148565] XFS mounting filesystem md2
67 Jan 21 14:23:39 p34 kernel: [ 13.242582] Ending clean XFS mount for filesystem: md2
68 Jan 21 14:23:39 p34 kernel: [ 13.242628] VFS: Mounted root (xfs filesystem) readonly.
69 Jan 21 14:23:39 p34 kernel: [ 13.242777] Freeing unused kernel memory: 220k freed
70 Jan 21 14:23:39 p34 kernel: [ 15.524891] Adding 2200760k swap on /dev/md0. Priority:-1 extents:1 across:2200760k
71 Jan 21 14:23:39 p34 kernel: [ 16.215532] md: md3 stopped.
72 Jan 21 14:23:39 p34 kernel: [ 16.381252] md: bind<sdj1>
73 Jan 21 14:23:39 p34 kernel: [ 16.381388] md: bind<sdk1>
74 Jan 21 14:23:39 p34 kernel: [ 16.381521] md: bind<sdg1>
75 Jan 21 14:23:39 p34 kernel: [ 16.381644] md: bind<sdi1>
76 Jan 21 14:23:39 p34 kernel: [ 16.381701] raid5: device sdi1 operational as raid disk 0
77 Jan 21 14:23:39 p34 kernel: [ 16.381756] raid5: device sdg1 operational as raid disk 3
78 Jan 21 14:23:39 p34 kernel: [ 16.381807] raid5: device sdk1 operational as raid disk 2
79 Jan 21 14:23:39 p34 kernel: [ 16.381859] raid5: device sdj1 operational as raid disk 1
80 Jan 21 14:23:39 p34 kernel: [ 16.382215] raid5: allocated 4198kB for md3
81 Jan 21 14:23:39 p34 kernel: [ 16.382260] raid5: raid level 5 set md3 active with 4 out of 4 devices, algorithm 2
82 Jan 21 14:23:39 p34 kernel: [ 16.382329] RAID5 conf printout:
83 Jan 21 14:23:39 p34 kernel: [ 16.382374] --- rd:4 wd:4
84 Jan 21 14:23:39 p34 kernel: [ 16.382418] disk 0, o:1, dev:sdi1
85 Jan 21 14:23:39 p34 kernel: [ 16.382464] disk 1, o:1, dev:sdj1
86 Jan 21 14:23:39 p34 kernel: [ 16.382509] disk 2, o:1, dev:sdk1
87 Jan 21 14:23:39 p34 kernel: [ 16.382555] disk 3, o:1, dev:sdg1
88 Jan 21 14:23:39 p34 kernel: [ 16.382726] md: md4 stopped.
89 Jan 21 14:23:39 p34 kernel: [ 16.405543] md: bind<sdf1>
90 Jan 21 14:23:39 p34 kernel: [ 16.405738] md: bind<sdh1>
91 Jan 21 14:23:39 p34 kernel: [ 16.405853] md: bind<sde1>
92 Jan 21 14:23:39 p34 kernel: [ 16.406007] md: bind<sdd1>
93 Jan 21 14:23:39 p34 kernel: [ 16.406180] md: bind<sdc1>
94 Jan 21 14:23:39 p34 kernel: [ 16.406244] raid5: device sdc1 operational as raid disk 0
95 Jan 21 14:23:39 p34 kernel: [ 16.406296] raid5: device sdd1 operational as raid disk 4
96 Jan 21 14:23:39 p34 kernel: [ 16.406349] raid5: device sde1 operational as raid disk 3
97 Jan 21 14:23:39 p34 kernel: [ 16.406400] raid5: device sdh1 operational as raid disk 2
98 Jan 21 14:23:39 p34 kernel: [ 16.406462] raid5: device sdf1 operational as raid disk 1
99 Jan 21 14:23:39 p34 kernel: [ 16.406834] raid5: allocated 5238kB for md4
100 Jan 21 14:23:39 p34 kernel: [ 16.406878] raid5: raid level 5 set md4 active with 5 out of 5 devices, algorithm 2
101 Jan 21 14:23:39 p34 kernel: [ 16.406948] RAID5 conf printout:
102 Jan 21 14:23:39 p34 kernel: [ 16.406993] --- rd:5 wd:5
103 Jan 21 14:23:39 p34 kernel: [ 16.407038] disk 0, o:1, dev:sdc1
104 Jan 21 14:23:39 p34 kernel: [ 16.407083] disk 1, o:1, dev:sdf1
105 Jan 21 14:23:39 p34 kernel: [ 16.407129] disk 2, o:1, dev:sdh1
106 Jan 21 14:23:39 p34 kernel: [ 16.407174] disk 3, o:1, dev:sde1
107 Jan 21 14:23:39 p34 kernel: [ 16.407224] disk 4, o:1, dev:sdd1
108 Jan 21 14:23:39 p34 kernel: [ 16.634756] kjournald starting. Commit interval 5 seconds
109 Jan 21 14:23:39 p34 kernel: [ 16.640355] EXT3 FS on md1, internal journal
110 Jan 21 14:23:39 p34 kernel: [ 16.640433] EXT3-fs: mounted filesystem with ordered data mode.
111 Jan 21 14:23:39 p34 kernel: [ 16.658938] Filesystem "md3": Disabling barriers, not supported by the underlying device
112 Jan 21 14:23:39 p34 kernel: [ 16.659220] XFS mounting filesystem md3
113 Jan 21 14:23:39 p34 kernel: [ 16.812624] Ending clean XFS mount for filesystem: md3
114 Jan 21 14:23:39 p34 kernel: [ 16.813261] Filesystem "md4": Disabling barriers, not supported by the underlying device
115 Jan 21 14:23:39 p34 kernel: [ 16.820854] XFS mounting filesystem md4
116 Jan 21 14:23:39 p34 kernel: [ 17.042008] Ending clean XFS mount for filesystem: md4
117 Jan 21 14:23:39 p34 kernel: [ 19.707434] e1000: eth1: e1000_watchdog: NIC Link is Up 100 Mbps Full Duplex
118 Jan 21 14:23:39 p34 kernel: [ 20.528131] process `syslogd' is using obsolete setsockopt SO_BSDCOMPAT
119 Jan 21 17:13:36 p34 kernel: [10215.146216] e1000: eth2: e1000_watchdog: NIC Link is Up 100 Mbps Full Duplex
120 Jan 22 14:22:58 p34 kernel: [86358.768855] e1000: eth1: e1000_watchdog: NIC Link is Down
121 Jan 22 14:23:09 p34 kernel: [86369.004406] e1000: eth1: e1000_watchdog: NIC Link is Up 100 Mbps Full Duplex
122 Jan 22 14:52:50 p34 kernel: [88150.048381] e1000: eth0: e1000_watchdog: NIC Link is Up 1000 Mbps Full Duplex
123 Jan 22 14:52:54 p34 kernel: [88153.597882] e1000: eth1: e1000_watchdog: NIC Link is Up 100 Mbps Full Duplex
124 Jan 22 14:54:52 p34 kernel: [88271.980770] e1000: eth0: e1000_watchdog: NIC Link is Up 1000 Mbps Full Duplex
125 Jan 22 14:54:53 p34 kernel: [88273.091388] e1000: eth0: e1000_watchdog: NIC Link is Down
126 Jan 22 14:54:56 p34 kernel: [88275.402828] e1000: eth0: e1000_watchdog: NIC Link is Up 1000 Mbps Full Duplex
127 Jan 22 14:55:32 p34 kernel: [88312.219576] e1000: eth1: e1000_watchdog: NIC Link is Up 100 Mbps Full Duplex
128 Jan 22 14:55:39 p34 kernel: [88319.117375] e1000: eth0: e1000_watchdog: NIC Link is Up 1000 Mbps Full Duplex
129 Jan 22 15:03:04 p34 kernel: [88764.146367] e1000: eth1: e1000_watchdog: NIC Link is Up 100 Mbps Full Duplex
130 Jan 22 15:03:07 p34 kernel: [88766.546277] e1000: eth1: e1000_watchdog: NIC Link is Up 100 Mbps Full Duplex
131 Jan 22 15:03:08 p34 kernel: [88767.643307] e1000: eth0: e1000_watchdog: NIC Link is Up 1000 Mbps Full Duplex
132 Jan 24 18:22:21 p34 kernel: [273475.674411] SysRq : Show State
133 Jan 24 18:22:21 p34 kernel: [273475.674423]
134 Jan 24 18:22:21 p34 kernel: [273475.674424] free sibling
135 Jan 24 18:22:21 p34 kernel: [273475.674432] task PC stack pid father child younger older
136 Jan 24 18:22:21 p34 kernel: [273475.674437] init S C20FFB3C 0 1 0 2 (NOTLB)
137 Jan 24 18:22:21 p34 kernel: [273475.674458] c20ffb50 00000082 00000002 c20ffb3c c20ffb38 00000000 c0507e00 c20e6a90
138 Jan 24 18:22:21 p34 kernel: [273475.674487] c20ffb08 c048513d 00000000 0000000a 5924598e 0000f8b9 000046c3 c20e6b9c
139 Jan 24 18:22:21 p34 kernel: [273475.674503] c1fe3280 00000001 c0507e00 00000000 f7b3fc40 c0507e00 c0139537 00000202
140 Jan 24 18:22:21 p34 kernel: [273475.674519] Call Trace:
141 Jan 24 18:22:21 p34 kernel: [273475.674523] [<c0139537>] handle_edge_irq+0x67/0x130
142 Jan 24 18:22:21 p34 kernel: [273475.674531] [<c0124bc0>] lock_timer_base+0x20/0x50
143 Jan 24 18:22:21 p34 kernel: [273475.674537] [<c042099b>] schedule_timeout+0x4b/0xd0
144 Jan 24 18:22:21 p34 kernel: [273475.674544] [<c012ed7d>] add_wait_queue+0x1d/0x50
145 Jan 24 18:22:21 p34 kernel: [273475.674550] [<c0124330>] process_timeout+0x0/0x10
146 Jan 24 18:22:21 p34 kernel: [273475.674558] [<c016564e>] do_select+0x3be/0x4a0
147 Jan 24 18:22:21 p34 kernel: [273475.674566] [<c03b9cdd>] __qdisc_run+0xad/0x1c0
148 Jan 24 18:22:21 p34 kernel: [273475.674575] [<c0165d20>] __pollwait+0x0/0x100
149 Jan 24 18:22:21 p34 kernel: [273475.674583] [<c0116c10>] default_wake_function+0x0/0x10
150 Jan 24 18:22:21 p34 kernel: [273475.674590] [<c03e09c7>] tcp_cong_avoid+0x27/0x40
151 Jan 24 18:22:21 p34 kernel: [273475.674601] [<c03e2516>] tcp_ack+0xd96/0x1c30
152 Jan 24 18:22:21 p34 kernel: [273475.674611] [<c03c9f20>] tcp_packet+0x0/0xb50
153 Jan 24 18:22:21 p34 kernel: [273475.674621] [<c03e6924>] tcp_rcv_established+0x5e4/0x6f0
154 Jan 24 18:22:21 p34 kernel: [273475.674630] [<c03ec8b3>] tcp_v4_do_rcv+0xc3/0x2d0
155 Jan 24 18:22:21 p34 kernel: [273475.674638] [<c03c4dd3>] nf_iterate+0x63/0x90
156 Jan 24 18:22:21 p34 kernel: [273475.674647] [<c03eebbc>] tcp_v4_rcv+0x5dc/0x770
157 Jan 24 18:22:21 p34 kernel: [273475.674666] [<c03c5097>] nf_hook_slow+0x57/0xf0
158 Jan 24 18:22:21 p34 kernel: [273475.674675] [<c03d2d94>] ip_local_deliver+0xe4/0x210
159 Jan 24 18:22:21 p34 kernel: [273475.674684] [<c03d2540>] ip_local_deliver_finish+0x0/0x170
160 Jan 24 18:22:21 p34 kernel: [273475.674699] [<c03d2a4c>] ip_rcv+0x29c/0x500
161 Jan 24 18:22:21 p34 kernel: [273475.674708] [<c03d22a0>] ip_rcv_finish+0x0/0x2a0
162 Jan 24 18:22:21 p34 kernel: [273475.674718] [<c02e75e6>] e1000_alloc_rx_buffers+0x96/0x370
163 Jan 24 18:22:21 p34 kernel: [273475.674729] [<c0169ce7>] __d_lookup+0x137/0x150
164 Jan 24 18:22:21 p34 kernel: [273475.674738] [<c016a258>] dput+0x18/0x150
165 Jan 24 18:22:21 p34 kernel: [273475.674746] [<c01600c5>] do_lookup+0x65/0x190
166 Jan 24 18:22:21 p34 kernel: [273475.674754] [<c016a258>] dput+0x18/0x150
167 Jan 24 18:22:21 p34 kernel: [273475.674760] [<c0161f84>] __link_path_walk+0xb04/0xc90
168 Jan 24 18:22:21 p34 kernel: [273475.674768] [<c0165908>] core_sys_select+0x1d8/0x2f0
169 Jan 24 18:22:21 p34 kernel: [273475.674776] [<c01623c6>] do_path_lookup+0x86/0x1d0
170 Jan 24 18:22:21 p34 kernel: [273475.674785] [<c01612e3>] getname+0xb3/0xe0
171 Jan 24 18:22:21 p34 kernel: [273475.674792] [<c015c148>] cp_new_stat64+0xf8/0x110
172 Jan 24 18:22:21 p34 kernel: [273475.674799] [<c0166172>] sys_select+0xe2/0x1a0
173 Jan 24 18:22:21 p34 kernel: [273475.674805] [<c0103138>] syscall_call+0x7/0xb
174 Jan 24 18:22:21 p34 kernel: [273475.674812] =======================
175 Jan 24 18:22:21 p34 kernel: [273475.674816] migration/0 S C2103F74 0 2 1 3 (L-TLB)
176 Jan 24 18:22:21 p34 kernel: [273475.674832] c2103f88 00000046 00000002 c2103f74 c2103f70 00000000 00000001 c20e6030
177 Jan 24 18:22:21 p34 kernel: [273475.674861] 04cbc7b6 .817293] [<c0114471>] __activate_task+0x21/0x40
178 Jan 24 18:22:21 p34 kernel: [273475.817297] [<c0116891>] try_to_wake_up+0x41/0x3c0
179 Jan 24 18:22:21 p34 kernel: [273475.817302] [<c0240d37>] xlog_assign_tail_lsn+0x47/0x60
180 Jan 24 18:22:21 p34 kernel: [273475.817306] [<c0240dbc>] xlog_state_release_iclog+0x6c/0x590
181 Jan 24 18:22:21 p34 kernel: [273475.817311] [<c0114229>] __wake_up_common+0x39/0x70
182 Jan 24 18:22:21 p34 kernel: [273475.817316] [<c0114849>] task_rq_lock+0x39/0x70
183 Jan 24 18:22:21 p34 kernel: [273475.817320] [<c0116891>] try_to_wake_up+0x41/0x3c0
184 Jan 24 18:22:21 p34 kernel: [273475.817485] [<c03a581b>] sock_def_readable+0x7b/0x80
185 Jan 24 18:22:21 p34 kernel: [273475.817490] [<c0155bad>] cache_alloc_refill+0x16d/0x500
186 Jan 24 18:22:21 p34 kernel: [273475.817495] [<c01600c5>] do_lookup+0x65/0x190
187 Jan 24 18:22:21 p34 kernel: [273475.817499] [<c0114229>] __wake_up_common+0x39/0x70
188 Jan 24 18:22:21 p34 kernel: [273475.817504] [<c01147b8>] __wake_up+0x38/0x50
189 Jan 24 18:22:21 p34 kernel: [273475.817508] [<c03a581b>] sock_def_readable+0x7b/0x80
190 Jan 24 18:22:21 p34 kernel: [273475.817513] [<c0165908>] core_sys_select+0x1d8/0x2f0
191 Jan 24 18:22:21 p34 kernel: [273475.817518] [<c03a142a>] sock_aio_write+0xea/0x110
192 Jan 24 18:22:21 p34 kernel: [273475.817522] [<c0225dd0>] xfs_dir2_put_dirent64_direct+0x0/0x90
193 Jan 24 18:22:21 p34 kernel: [273475.817527] [<c0225dd0>] xfs_dir2_put_dirent64_direct+0x0/0x90
194 Jan 24 18:22:21 p34 kernel: [273475.817532] [<c022568e>] xfs_dir_getdents+0x11e/0x150
195 Jan 24 18:22:21 p34 kernel: [273475.817536] [<c0225dd0>] xfs_dir2_put_dirent64_direct+0x0/0x90
196 Jan 24 18:22:21 p34 kernel: [273475.817541] [<c02351c5>] xfs_iunlock+0x85/0xa0
197 Jan 24 18:22:21 p34 kernel: [273475.817545] [<c0254767>] xfs_readdir+0x57/0x70
198 Jan 24 18:22:21 p34 kernel: [273475.817549] [<c025fe3f>] xfs_file_readdir+0x1ff/0x240
199 Jan 24 18:22:21 p34 kernel: [273475.817554] [<c0164860>] filldir64+0x0/0xe0
200 Jan 24 18:22:21 p34 kernel: [273475.817718] [<c0166172>] sys_select+0xe2/0x1a0
201 Jan 24 18:22:21 p34 kernel: [273475.817723] [<c0103138>] syscall_call+0x7/0xb
202 Jan 24 18:22:21 p34 kernel: [273475.817727] =======================
203 Jan 24 18:22:21 p34 kernel: [273475.817729] smbd S CB383B3C 0 14934 2224 2236 (NOTLB)
204 Jan 24 18:22:21 p34 kernel: [273475.817738] cb383b50 00000086 00000002 cb383b3c cb383b38 00000000 c054c300 c4d24a90
205 Jan 24 18:22:21 p34 kernel: [273475.817749] c01202d2 0000000a 00000202 00000007 963adcb3 0000f8b6 0000dcff c4d24b9c
206 Jan 24 18:22:21 p34 kernel: [273475.817973] c1fe3280 00000001 c03ca36b 19bfcc00 c0d37ac0 ffffffff 00000000 00000202
207 Jan 24 18:22:21 p34 kernel: [273475.817985] Call Trace:
208 Jan 24 18:22:21 p34 kernel: [273475.817987] [<c01202d2>] __do_softirq+0x82/0xf0
209 Jan 24 18:22:21 p34 kernel: [273475.817992] [<c03ca36b>] tcp_packet+0x44b/0xb50
210 Jan 24 18:22:21 p34 kernel: [273475.817997] [<c0124bc0>] lock_timer_base+0x20/0x50
211 Jan 24 18:22:21 p34 kernel: [273475.818001] [<c042099b>] schedule_timeout+0x4b/0xd0
212 Jan 24 18:22:21 p34 kernel: [273475.818006] [<c012ed7d>] add_wait_queue+0x1d/0x50
213 Jan 24 18:22:21 p34 kernel: [273475.818011] [<c0124330>] process_timeout+0x0/0x10
214 Jan 24 18:22:21 p34 kernel: [273475.818015] [<c016564e>] do_select+0x3be/0x4a0
215 Jan 24 18:22:21 p34 kernel: [273475.818020] [<c0165d20>] __pollwait+0x0/0x100
216 Jan 24 18:22:21 p34 kernel: [273475.818131] [<c0116c10>] default_wake_function+0x0/0x10
217 Jan 24 18:22:21 p34 kernel: [273475.818135] [<c0116c10>] default_wake_function+0x0/0x10
218 Jan 24 18:22:21 p34 kernel: [273475.818139] [<c03ac953>] dev_hard_start_xmit+0x203/0x2d0
219 Jan 24 18:22:21 p34 kernel: [273475.818144] [<c03b9cdd>] __qdisc_run+0xad/0x1c0
220 Jan 24 18:22:21 p34 kernel: [273475.818148] [<c03ae411>] dev_queue_xmit+0xb1/0x320
221 Jan 24 18:22:21 p34 kernel: [273475.818153] [<c03d8147>] ip_output+0x137/0x270
222 Jan 24 18:22:21 p34 kernel: [273475.818157] [<c03d6b50>] ip_finish_output+0x0/0x1d0
223 Jan 24 18:22:21 p34 kernel: [273475.818161] [<c03d5210>] dst_output+0x0/0x10
224 Jan 24 18:22:21 p34 kernel: [273475.818166] [<c03d767f>] ip_queue_xmit+0x1bf/0x480
225 Jan 24 18:22:21 p34 kernel: [273475.818170] [<c03d5210>] dst_output+0x0/0x10
226 Jan 24 18:22:21 p34 kernel: [273475.818174] [<c0216399>] xfs_bmap_search_extents+0x69/0x140
227 Jan 24 18:22:21 p34 kernel: [273475.818179] [<c03a74b8>] kfree_skbmem+0x8/0x80
228 Jan 24 18:22:21 p34 kernel: [273475.818239] [<c02e282c>] e1000_unmap_and_free_tx_resource+0x1c/0x30
229 Jan 24 18:22:21 p34 kernel: [273475.818244] [<c03e7658>] tcp_transmit_skb+0x3a8/0x710
230 Jan 24 18:22:21 p34 kernel: [273475.818249] [<c0124bc0>] lock_timer_base+0x20/0x50
231 Jan 24 18:22:21 p34 kernel: [273475.818253] [<c0124cf8>] __mod_timer+0x98/0xb0
232 Jan 24 18:22:21 p34 kernel: [273475.818257] [<c03a408c>] sk_reset_timer+0xc/0x20
233 Jan 24 18:22:21 p34 kernel: [273475.818262] [<c03e8f57>] __tcp_push_pending_frames+0x127/0x8a0
234 Jan 24 18:22:21 p34 kernel: [273475.818374] [<c02e78c0>] e1000_clean_rx_irq+0x0/0x4b0
235 Jan 24 18:22:21 p34 kernel: [273475.818378] [<c02e6b1e>] e1000_clean+0x1be/0x2b0
236 Jan 24 18:22:21 p34 kernel: [273475.818382] [<c03a3ef3>] release_sock+0x13/0xc0
237 Jan 24 18:22:21 p34 kernel: [273475.818387] [<c03de01f>] tcp_sendmsg+0x77f/0xb30
238 Jan 24 18:22:21 p34 kernel: [273475.818392] [<c0165908>] core_sys_select+0x1d8/0x2f0
239 Jan 24 18:22:21 p34 kernel: [273475.818397] [<c03a142a>] sock_aio_write+0xea/0x110
240 Jan 24 18:22:21 p34 kernel: [273475.818401] [<c026236e>] xfs_vn_getattr+0x3e/0x120
241 Jan 24 18:22:21 p34 kernel: [273475.818406] [<c01247f2>] do_timer+0x4a2/0x840
242 Jan 24 18:22:21 p34 kernel: [273475.818411] [<c0166172>] sys_select+0xe2/0x1a0
243 Jan 24 18:22:21 p34 kernel: [273475.818415] [<c0139537>] handle_edge_irq+0x67/0x130
244 Jan 24 18:22:21 p34 kernel: [273475.818420] [<c0103138>] syscall_call+0x7/0xb
245 Jan 24 18:22:21 p34 kernel: [273475.818424] =======================
246 Jan 24 18:22:21 p34 kernel: [273475.818426] sshd S F72D7DA4 0 15052 2263 15054 16808 10840 (NOTLB)
247 Jan 24 18:22:21 p34 kernel: [273475.818647] f72d7db8 00000082 00000002 f72d7da4 f72d7da0 00000000 00000040 c649c030
248 Jan 24 18:22:21 p34 kernel: [273475.818658] 00000000 00000000 00000000 00000008 f24d51ad 0000f4b1 00000595 c649c13c
249 Jan 24 18:22:21 p34 kernel: [273475.818669] c1fe3280 00000001 01b42f73 f7bd1e84 ccf2b780 f7a2584c f72d7e08 c016a258
250 Jan 24 18:22:21 p34 kernel: [273475.818682] Call Trace:
251 Jan 24 18:22:21 p34 kernel: [273475.818685] [<c016a258>] dput+0x18/0x150
252 Jan 24 18:22:21 p34 kernel: [273475.818689] [<c04209c6>] schedule_timeout+0x76/0xd0
253 Jan 24 18:22:21 p34 kernel: [273475.818853] [<c0161f84>] __link_path_walk+0xb04/0xc90
254 Jan 24 18:22:21 p34 kernel: [273475.818858] [<c012ec70>] prepare_to_wait+0x20/0x70
255 Jan 24 18:22:21 p34 kernel: [273475.818862] [<c0409218>] unix_stream_recvmsg+0x358/0x500
256 Jan 24 18:22:21 p34 kernel: [273475.818868] [<c0162175>] link_path_walk+0x65/0xc0
257 Jan 24 18:22:21 p34 kernel: [273475.818872] [<c012eac0>] autoremove_wake_function+0x0/0x50
258 Jan 24 18:22:21 p34 kernel: [273475.818877] [<c03a1538>] sock_aio_read+0xe8/0x100
259 Jan 24 18:22:21 p34 kernel: [273475.818881] [<c01623c6>] do_path_lookup+0x86/0x1d0
260 Jan 24 18:22:21 p34 kernel: [273475.818886] [<c0158e37>] do_sync_read+0xc7/0x110
261 Jan 24 18:22:21 p34 kernel: [273475.818890] [<c015b7da>] chrdev_open+0x7a/0x150
262 Jan 24 18:22:21 p34 kernel: [273475.818895] [<c012eac0>] autoremove_wake_function+0x0/0x50
263 Jan 24 18:22:21 p34 kernel: [273475.818899] [<c01597ff>] vfs_read+0x14f/0x160
264 Jan 24 18:22:21 p34 kernel: [273475.818904] [<c0159cc1>] sys_read+0x41/0x70
265 Jan 24 18:22:21 p34 kernel: [273475.818908] [<c0103138>] syscall_call+0x7/0xb
266 Jan 24 18:22:21 p34 kernel: [273475.818912] =======================
267 Jan 24 18:22:21 p34 kernel: [273475.818915] sshd S CF43BB3C 0 15054 15052 15055 (NOTLB)
268 Jan 24 18:22:21 p34 kernel: [273475.819085] cf43bb50 00000086 00000002 cf43bb3c cf43bb38 00000000 00000000 f52a6030
269 Jan 24 18:22:21 p34 kernel: [273475.819096] 3d9c0821 0000f8b0 00000202 00000009 3d9e5717 0000f8b0 00007a29 f52a613c
270 Jan 24 18:22:21 p34 kernel: [273475.819108] c1fe3280 00000001 c03ca36b 19bfcc00 ca9556c0 00000003 c04df568 00000202
271 Jan 24 18:22:21 p34 kernel: [273475.819327] Call Trace:
272 Jan 24 18:22:21 p34 kernel: [273475.819330] [<c03ca36b>] tcp_packet+0x44b/0xb50
273 Jan 24 18:22:21 p34 kernel: [273475.819335] [<c0124bc0>] lock_timer_base+0x20/0x50
274 Jan 24 18:22:21 p34 kernel: [273475.819339] [<c0124cf8>] __mod_timer+0x98/0xb0
275 Jan 24 18:22:21 p34 kernel: [273475.819343] [<c042099b>] schedule_timeout+0x4b/0xd0
276 Jan 24 18:22:21 p34 kernel: [273475.819348] [<c0124330>] process_timeout+0x0/0x10
277 Jan 24 18:22:21 p34 kernel: [273475.819409] [<c016564e>] do_select+0x3be/0x4a0
278 Jan 24 18:22:21 p34 kernel: [273475.819414] [<c0165d20>] __pollwait+0x0/0x100
279 Jan 24 18:22:21 p34 kernel: [273475.819418] [<c0116c10>] default_wake_function+0x0/0x10
280 Jan 24 18:22:21 p34 kernel: [273475.819422] [<c0116c10>] default_wake_function+0x0/0x10
281 Jan 24 18:22:21 p34 kernel: [273475.819427] [<c0116c10>] default_wake_function+0x0/0x10
282 Jan 24 18:22:21 p34 kernel: [273475.819431] [<c0116c10>] default_wake_function+0x0/0x10
283 Jan 24 18:22:21 p34 kernel: [273475.819435] [<c03ae411>] dev_queue_xmit+0xb1/0x320
284 Jan 24 18:22:21 p34 kernel: [273475.819440] [<c03d8147>] ip_output+0x137/0x270
285 Jan 24 18:22:21 p34 kernel: [273475.819444] [<c03d6b50>] ip_finish_output+0x0/0x1d0
286 Jan 24 18:22:21 p34 kernel: [273475.819449] [<c03d5210>] dst_output+0x0/0x10
287 Jan 24 18:22:21 p34 kernel: [273475.819453] [<c03d767f>] ip_queue_xmit+0x1bf/0x480
288 Jan 24 18:22:21 p34 kernel: [273475.819565] [<c03d5210>] dst_output+0x0/0x10
289 Jan 24 18:22:21 p34 kernel: [273475.819570] [<c0105b65>] do_IRQ+0x45/0x80
290 Jan 24 18:22:21 p34 kernel: [273475.819574] [<c04221e7>] nmi_stack_correct+0x26/0x2b
291 Jan 24 18:22:21 p34 kernel: [273475.819578] [<c03e09c7>] tcp_cong_avoid+0x27/0x40
292 Jan 24 18:22:21 p34 kernel: [273475.819583] [<c03e7658>] tcp_transmit_skb+0x3a8/0x710
293 Jan 24 18:22:21 p34 kernel: [273475.819587] [<c0124bc0>] lock_timer_base+0x20/0x50
294 Jan 24 18:22:21 p34 kernel: [273475.819591] [<c0124cf8>] __mod_timer+0x98/0xb0
295 Jan 24 18:22:21 p34 kernel: [273475.819595] [<c03a408c>] sk_reset_timer+0xc/0x20
296 Jan 24 18:22:21 p34 kernel: [273475.819600] [<c03e8f57>] __tcp_push_pending_frames+0x127/0x8a0
297 Jan 24 18:22:21 p34 kernel: [273475.819605] [<c03a3ef3>] release_sock+0x13/0xc0
298 Jan 24 18:22:21 p34 kernel: [273475.819609] [<c03de01f>] tcp_sendmsg+0x77f/0xb30
299 Jan 24 18:22:21 p34 kernel: [273475.819614] [<c03c4dd3>] nf_iterate+0x63/0x90
300 Jan 24 18:22:21 p34 kernel: [273475.819618] [<c0165908>] core_sys_select+0x1d8/0x2f0
301 Jan 24 18:22:21 p34 kernel: [273475.819623] [<c03a142a>] sock_aio_write+0xea/0x110
302 Jan 24 18:22:21 p34 kernel: [273475.819628] [<c0158d27>] do_sync_write+0xc7/0x110
303 Jan 24 18:22:21 p34 kernel: [273475.819633] [<c012eac0>] autoremove_wake_function+0x0/0x50
304 Jan 24 18:22:21 p34 kernel: [273475.819637] [<c02e6b1e>] e1000_clean+0x1be/0x2b0
305 Jan 24 18:22:21 p34 kernel: [273475.819852] [<c0166172>] sys_select+0xe2/0x1a0
306 Jan 24 18:22:21 p34 kernel: [273475.819857] [<c0103138>] syscall_call+0x7/0xb
307 Jan 24 18:22:21 p34 kernel: [273475.819861] =======================
308 Jan 24 18:22:21 p34 kernel: [273475.819863] bash S C041FAFB 0 15055 15054 16795 (NOTLB)
309 Jan 24 18:22:21 p34 kernel: [273475.819872] c4c4bf24 00000086 f7eca000 c041fafb cbd81000 cbd81000 c04d442c c2aafa90
310 Jan 24 18:22:21 p34 kernel: [273475.819883] c04d4428 00000000 c1875ce0 00000008 81e9dab1 0000f8a6 00010e63 c2aafb9c
311 Jan 24 18:22:21 p34 kernel: [273475.820055] c1fdf080 00000000 fffb9000 00000086 c3afe0c0 f796e408 00000000 c4c4bf28
312 Jan 24 18:22:21 p34 kernel: [273475.820067] Call Trace:
313 Jan 24 18:22:21 p34 kernel: [273475.820069] [<c041fafb>] __sched_text_start+0x31b/0x950
314 Jan 24 18:22:21 p34 kernel: [273475.820075] [<c012eadb>] autoremove_wake_function+0x1b/0x50
315 Jan 24 18:22:21 p34 kernel: [273475.820080] [<c0114229>] __wake_up_common+0x39/0x70
316 Jan 24 18:22:21 p34 kernel: [273475.820084] [<c011d9ec>] do_wait+0x1cc/0xb30
317 Jan 24 18:22:21 p34 kernel: [273475.820089] [<c01147b8>] __wake_up+0x38/0x50
318 Jan 24 18:22:21 p34 kernel: [273475.820093] [<c015ecb6>] pipe_release+0x86/0xb0
319 Jan 24 18:22:21 p34 kernel: [273475.820097] [<c016453f>] do_ioctl+0x7f/0x90
320 Jan 24 18:22:21 p34 kernel: [273475.820102] [<c0116c10>] default_wake_function+0x0/0x10
321 Jan 24 18:22:21 p34 kernel: [273475.820266] [<c011e381>] sys_wait4+0x31/0x40
322 Jan 24 18:22:21 p34 kernel: [273475.820270] [<c011e3b5>] sys_waitpid+0x25/0x30
323 Jan 24 18:22:21 p34 kernel: [273475.820274] [<c0103138>] syscall_call+0x7/0xb
324 Jan 24 18:22:21 p34 kernel: [273475.820279] [<c0420033>] __sched_text_start+0x853/0x950
325 Jan 24 18:22:21 p34 kernel: [273475.820283] =======================
326 Jan 24 18:22:21 p34 kernel: [273475.820286] pickup S C42A1B3C 0 16683 14490 13731 (NOTLB)
327 Jan 24 18:22:21 p34 kernel: [273475.820295] c42a1b50 00200082 00000002 c42a1b3c c42a1b38 00000000 c21bb980 c2176a90
328 Jan 24 18:22:21 p34 kernel: [273475.820306] 9cbf1354 0000f876 c393d900 00000007 7aad3a21 0000f8ae 000003ab c2176b9c
329 Jan 24 18:22:21 p34 kernel: [273475.820477] c1fe3280 00000001 00002127 00000000 c8660180 f7a475e0 00000015 00200202
330 Jan 24 18:22:21 p34 kernel: [273475.820489] Call Trace:
331 Jan 24 18:22:21 p34 kernel: [273475.820492] [<c0124bc0>] lock_timer_base+0x20/0x50
332 Jan 24 18:22:21 p34 kernel: [273475.820497] [<c0124cf8>] __mod_timer+0x98/0xb0
333 Jan 24 18:22:21 p34 kernel: [273475.820501] [<c042099b>] schedule_timeout+0x4b/0xd0
334 Jan 24 18:22:21 p34 kernel: [273475.820505] [<c012ed7d>] add_wait_queue+0x1d/0x50
335 Jan 24 18:22:21 p34 kernel: [273475.820510] [<c0124330>] process_timeout+0x0/0x10
336 Jan 24 18:22:21 p34 kernel: [273475.820514] [<c016564e>] do_select+0x3be/0x4a0
337 Jan 24 18:22:21 p34 kernel: [273475.820519] [<c0165d20>] __pollwait+0x0/0x100
338 Jan 24 18:22:21 p34 kernel: [273475.820736] [<c0116c10>] default_wake_function+0x0/0x10
339 Jan 24 18:22:21 p34 kernel: [273475.820740] [<c0116c10>] default_wake_function+0x0/0x10
340 Jan 24 18:22:21 p34 kernel: [273475.820745] [<c013fa67>] __alloc_pages+0x57/0x2f0
341 Jan 24 18:22:21 p34 kernel: [273475.820749] [<c0236b35>] xfs_iext_bno_to_ext+0x95/0x1f0
342 Jan 24 18:22:21 p34 kernel: [273475.820753] [<c0147f04>] __handle_mm_fault+0x4e4/0x900
343 Jan 24 18:22:21 p34 kernel: [273475.820758] [<c0236b35>] xfs_iext_bno_to_ext+0x95/0x1f0
344 Jan 24 18:22:21 p34 kernel: [273475.820762] [<c02162cb>] xfs_bmap_search_multi_extents+0x7b/0xe0
345 Jan 24 18:22:21 p34 kernel: [273475.820767] [<c0216399>] xfs_bmap_search_extents+0x69/0x140
346 Jan 24 18:22:21 p34 kernel: [273475.820772] [<c01899fe>] proc_alloc_inode+0x3e/0x70
347 Jan 24 18:22:21 p34 kernel: [273475.820777] [<c01899fe>] proc_alloc_inode+0x3e/0x70
348 Jan 24 18:22:21 p34 kernel: [273475.820781] [<c0114471>] __activate_task+0x21/0x40
349 Jan 24 18:22:21 p34 kernel: [273475.820786] [<c0116891>] try_to_wake_up+0x41/0x3c0
350 Jan 24 18:22:21 p34 kernel: [273475.820790] [<c0169ce7>] __d_lookup+0x137/0x150
351 Jan 24 18:22:21 p34 kernel: [273475.820794] [<c0155bad>] cache_alloc_refill+0x16d/0x500
352 Jan 24 18:22:21 p34 kernel: [273475.820799] [<c03a7d45>] __alloc_skb+0x55/0x110
353 Jan 24 18:22:21 p34 kernel: [273475.820803] [<c03a435e>] sock_alloc_send_skb+0x16e/0x1c0
354 Jan 24 18:22:21 p34 kernel: [273475.820808] [<c03a57be>] sock_def_readable+0x1e/0x80
355 Jan 24 18:22:21 p34 kernel: [273475.820813] [<c03a65ac>] skb_queue_tail+0x1c/0x50
356 Jan 24 18:22:21 p34 kernel: [273475.820818] [<c0165908>] core_sys_select+0x1d8/0x2f0
357 Jan 24 18:22:21 p34 kernel: [273475.820932] [<c03a142a>] sock_aio_write+0xea/0x110
358 Jan 24 18:22:21 p34 kernel: [273475.820936] [<c0158d27>] do_sync_write+0xc7/0x110
359 Jan 24 18:22:21 p34 kernel: [273475.820941] [<c0123e36>] getnstimeofday+0x36/0xd0
360 Jan 24 18:22:21 p34 kernel: [273475.820945] [<c0123e36>] getnstimeofday+0x36/0xd0
361 Jan 24 18:22:21 p34 kernel: [273475.820949] [<c0131848>] enqueue_hrtimer+0x58/0x90
362 Jan 24 18:22:21 p34 kernel: [273475.820954] [<c0131bb9>] hrtimer_start+0xa9/0xf0
363 Jan 24 18:22:21 p34 kernel: [273475.820959] [<c011f2dc>] do_setitimer+0x14c/0x4e0
364 Jan 24 18:22:21 p34 kernel: [273475.820963] [<c0166172>] sys_select+0xe2/0x1a0
365 Jan 24 18:22:21 p34 kernel: [273475.820968] [<c0103138>] syscall_call+0x7/0xb
366 Jan 24 18:22:21 p34 kernel: [273475.820972] [<c02effa3>] e1000_init_hw+0x723/0xb70
367 Jan 24 18:22:21 p34 kernel: [273475.820977] =======================
368 Jan 24 18:22:21 p34 kernel: [273475.820979] pine S F5957B3C 0 16771 24342 (NOTLB)
369 Jan 24 18:22:21 p34 kernel: [273475.820988] f5957b50 00000086 00000002 f5957b3c f5957b38 00000000 cc52f740 f5839030
370 Jan 24 18:22:21 p34 kernel: [273475.821159] 946c91c2 0000f8b7 f5957c5c 00000009 9d8958f6 0000f8b7 000006bf f583913c
371 Jan 24 18:22:21 p34 kernel: [273475.821171] c1fe3280 00000001 000fc050 00000000 cb1f8100 00000003 c04df568 00000202
372 Jan 24 18:22:21 p34 kernel: [273475.821445] Call Trace:
373 Jan 24 18:22:21 p34 kernel: [273475.821448] [<c0124bc0>] lock_timer_base+0x20/0x50
374 Jan 24 18:22:21 p34 kernel: [273475.821452] [<c042099b>] schedule_timeout+0x4b/0xd0
375 Jan 24 18:22:21 p34 kernel: [273475.821457] [<c0124330>] process_timeout+0x0/0x10
376 Jan 24 18:22:21 p34 kernel: [273475.821461] [<c016564e>] do_select+0x3be/0x4a0
377 Jan 24 18:22:21 p34 kernel: [273475.821466] [<c0165d20>] __pollwait+0x0/0x100
378 Jan 24 18:22:21 p34 kernel: [273475.821471] [<c0116c10>] default_wake_function+0x0/0x10
379 Jan 24 18:22:21 p34 kernel: [273475.821475] [<c0116c10>] default_wake_function+0x0/0x10
380 Jan 24 18:22:21 p34 kernel: [273475.821479] [<c0131975>] hrtimer_run_queues+0xf5/0x150
381 Jan 24 18:22:21 p34 kernel: [273475.821484] [<c024efa9>] xfs_trans_tail_ail+0x39/0x50
382 Jan 24 18:22:21 p34 kernel: [273475.821488] [<c0240d37>] xlog_assign_tail_lsn+0x47/0x60
383 Jan 24 18:22:21 p34 kernel: [273475.821493] [<c0236b35>] xfs_iext_bno_to_ext+0x95/0x1f0
384 Jan 24 18:22:21 p34 kernel: [273475.821498] [<c03c69ce>] __nf_ct_refresh_acct+0x1e/0xa0
385 Jan 24 18:22:21 p34 kernel: [273475.821502] [<c03ca36b>] tcp_packet+0x44b/0xb50
386 Jan 24 18:22:21 p34 kernel: [273475.821507] [<c02162cb>] xfs_bmap_search_multi_extents+0x7b/0xe0
387 Jan 24 18:22:21 p34 kernel: [273475.821512] [<c03e09c7>] tcp_cong_avoid+0x27/0x40
388 Jan 24 18:22:21 p34 kernel: [273475.821516] [<c03e2516>] tcp_ack+0xd96/0x1c30
389 Jan 24 18:22:21 p34 kernel: [273475.821682] [<c0114471>] __activate_task+0x21/0x40
390 Jan 24 18:22:21 p34 kernel: [273475.821686] [<c0116891>] try_to_wake_up+0x41/0x3c0
391 Jan 24 18:22:21 p34 kernel: [273475.821691] [<c0114229>] __wake_up_common+0x39/0x70
392 Jan 24 18:22:21 p34 kernel: [273475.821696] [<c01147b8>] __wake_up+0x38/0x50
393 Jan 24 18:22:21 p34 kernel: [273475.821700] [<c02ae094>] n_tty_receive_buf+0x324/0x1070
394 Jan 24 18:22:21 p34 kernel: [273475.821704] [<c0165908>] core_sys_select+0x1d8/0x2f0
395 Jan 24 18:22:21 p34 kernel: [273475.821709] [<c03d2d94>] ip_local_deliver+0xe4/0x210
396 Jan 24 18:22:21 p34 kernel: [273475.821713] [<c03d2a4c>] ip_rcv+0x29c/0x500
397 Jan 24 18:22:21 p34 kernel: [273475.821718] [<c03d22a0>] ip_rcv_finish+0x0/0x2a0
398 Jan 24 18:22:21 p34 kernel: [273475.821722] [<c02e75e6>] e1000_alloc_rx_buffers+0x96/0x370
399 Jan 24 18:22:21 p34 kernel: [273475.821726] [<c03d27b0>] ip_rcv+0x0/0x500
400 Jan 24 18:22:21 p34 kernel: [273475.821731] [<c02e7b52>] e1000_clean_rx_irq+0x292/0x4b0
401 Jan 24 18:22:21 p34 kernel: [273475.821735] [<c01147b8>] __wake_up+0x38/0x50
402 Jan 24 18:22:21 p34 kernel: [273475.821739] [<c02a98a4>] tty_ldisc_deref+0x44/0x70
403 Jan 24 18:22:21 p34 kernel: [273475.821744] [<c02ab2c5>] tty_write+0x1a5/0x1f0
404 Jan 24 18:22:21 p34 kernel: [273475.821748] [<c0166172>] sys_select+0xe2/0x1a0
405 Jan 24 18:22:21 p34 kernel: [273475.821753] [<c0103138>] syscall_call+0x7/0xb
406 Jan 24 18:22:21 p34 kernel: [273475.821757] =======================
407 Jan 24 18:22:21 p34 kernel: [273475.821921] pine S C9EF3B3C 0 16773 24341 (NOTLB)
408 Jan 24 18:22:21 p34 kernel: [273475.821929] c9ef3b50 00000086 00000002 c9ef3b3c c9ef3b38 00000000 c025b00e cf731a90
409 Jan 24 18:22:21 p34 kernel: [273475.821940] f7a215a8 f5359cd0 00000002 00000007 a46dbe1e 0000f8b8 000079f8 cf731b9c
410 Jan 24 18:22:21 p34 kernel: [273475.821952] c1fdf080 00000000 00000000 0a200001 c0d37cc0 00000003 c04df568 00000202
411 Jan 24 18:22:21 p34 kernel: [273475.822123] Call Trace:
412 Jan 24 18:22:21 p34 kernel: [273475.822126] [<c025b00e>] kmem_zone_alloc+0x4e/0xc0
413 Jan 24 18:22:21 p34 kernel: [273475.822130] [<c0124bc0>] lock_timer_base+0x20/0x50
414 Jan 24 18:22:21 p34 kernel: [273475.822135] [<c042099b>] schedule_timeout+0x4b/0xd0
415 Jan 24 18:22:21 p34 kernel: [273475.822139] [<c0124330>] process_timeout+0x0/0x10
416 Jan 24 18:22:21 p34 kernel: [273475.822144] [<c016564e>] do_select+0x3be/0x4a0
417 Jan 24 18:22:21 p34 kernel: [273475.822149] [<c0165d20>] __pollwait+0x0/0x100
418 Jan 24 18:22:21 p34 kernel: [273475.822153] [<c0116c10>] default_wake_function+0x0/0x10
419 Jan 24 18:22:21 p34 kernel: [273475.822158] [<c0116c10>] default_wake_function+0x0/0x10
420 Jan 24 18:22:21 p34 kernel: [273475.822162] [<c025e895>] _xfs_buf_find+0xc5/0x200
421 Jan 24 18:22:21 p34 kernel: [273475.822166] [<c025b00e>] kmem_zone_alloc+0x4e/0xc0
422 Jan 24 18:22:21 p34 kernel: [273475.822171] [<c025ec87>] xfs_buf_get_flags+0x2b7/0x4e0
423 Jan 24 18:22:21 p34 kernel: [273475.822176] [<c025b00e>] kmem_zone_alloc+0x4e/0xc0
424 Jan 24 18:22:21 p34 kernel: [273475.822180] [<c025eecc>] xfs_buf_read_flags+0x1c/0x90
425 Jan 24 18:22:21 p34 kernel: [273475.822343] [<c0221abd>] xfs_da_buf_make+0xed/0x140
426 Jan 24 18:22:21 p34 kernel: [273475.822348] [<c02222d7>] xfs_da_do_buf+0x7c7/0x8e0
427 Jan 24 18:22:21 p34 kernel: [273475.822352] [<c024eaa0>] _xfs_trans_commit+0x8c0/0xa30
428 Jan 24 18:22:21 p34 kernel: [273475.822357] [<c0114471>] __activate_task+0x21/0x40
429 Jan 24 18:22:21 p34 kernel: [273475.822362] [<c0116891>] try_to_wake_up+0x41/0x3c0
430 Jan 24 18:22:21 p34 kernel: [273475.822366] [<c0114229>] __wake_up_common+0x39/0x70
431 Jan 24 18:22:21 p34 kernel: [273475.822371] [<c01147b8>] __wake_up+0x38/0x50
432 Jan 24 18:22:21 p34 kernel: [273475.822375] [<c02ae094>] n_tty_receive_buf+0x324/0x1070
433 Jan 24 18:22:21 p34 kernel: [273475.822379] [<c0165908>] core_sys_select+0x1d8/0x2f0
434 Jan 24 18:22:21 p34 kernel: [273475.822384] [<c013f91b>] get_page_from_freelist+0x2ab/0x3a0
435 Jan 24 18:22:21 p34 kernel: [273475.822389] [<c026236e>] xfs_vn_getattr+0x3e/0x120
436 Jan 24 18:22:21 p34 kernel: [273475.822394] [<c0123e36>] getnstimeofday+0x36/0xd0
437 Jan 24 18:22:21 p34 kernel: [273475.822398] [<c0131d92>] ktime_get_ts+0x22/0x60
438 Jan 24 18:22:21 p34 kernel: [273475.822402] [<c0131de6>] ktime_get+0x16/0x40
439 Jan 24 18:22:21 p34 kernel: [273475.822618] [<c011fd68>] ns_to_timeval+0x18/0x50
440 Jan 24 18:22:21 p34 kernel: [273475.822623] [<c01319f7>] lock_hrtimer_base+0x27/0x60
441 Jan 24 18:22:21 p34 kernel: [273475.822627] [<c0131ad3>] hrtimer_try_to_cancel+0x33/0x50
442 Jan 24 18:22:21 p34 kernel: [273475.822632] [<c011f2dc>] do_setitimer+0x14c/0x4e0
443 Jan 24 18:22:21 p34 kernel: [273475.822636] [<c0166172>] sys_select+0xe2/0x1a0
444 Jan 24 18:22:21 p34 kernel: [273475.822641] [<c011f740>] alarm_setitimer+0x30/0x70
445 Jan 24 18:22:21 p34 kernel: [273475.822645] [<c0103138>] syscall_call+0x7/0xb
446 Jan 24 18:22:21 p34 kernel: [273475.822650] =======================
447 Jan 24 18:22:21 p34 kernel: [273475.822652] su S C47DBF10 0 16795 15055 16796 (NOTLB)
448 Jan 24 18:22:21 p34 kernel: [273475.822661] c47dbf24 00000082 00000002 c47dbf10 c47dbf0c 00000000 c2aaf560 c2aaf560
449 Jan 24 18:22:21 p34 kernel: [273475.822834] cfccb518 c20ec560 c397f040 00000008 b5fc7675 0000f8a6 00005763 c2aaf66c
450 Jan 24 18:22:21 p34 kernel: [273475.822846] c1fe3280 00000001 c1fe3280 0804e104 c397fa40 c397fa40 c397fa40 c9076000
451 Jan 24 18:22:21 p34 kernel: [273475.822858] Call Trace:
452 Jan 24 18:22:21 p34 kernel: [273475.822861] [<c011a38e>] copy_process+0xd7e/0xfc0
453 Jan 24 18:22:21 p34 kernel: [273475.822866] [<c011d9ec>] do_wait+0x1cc/0xb30
454 Jan 24 18:22:21 p34 kernel: [273475.822870] [<c01258aa>] do_sigaction+0xea/0x1b0
455 Jan 24 18:22:21 p34 kernel: [273475.822875] [<c0116c10>] default_wake_function+0x0/0x10
456 Jan 24 18:22:21 p34 kernel: [273475.822879] [<c011e381>] sys_wait4+0x31/0x40
457 Jan 24 18:22:21 p34 kernel: [273475.822883] [<c011e3b5>] sys_waitpid+0x25/0x30
458 Jan 24 18:22:21 p34 kernel: [273475.822888] [<c0103138>] syscall_call+0x7/0xb
459 Jan 24 18:22:21 p34 kernel: [273475.822892] [<c042007b>] __sched_text_start+0x89b/0x950
460 Jan 24 18:22:21 p34 kernel: [273475.822897] [<c0420033>] __sched_text_start+0x853/0x950
461 Jan 24 18:22:21 p34 kernel: [273475.822901] =======================
462 Jan 24 18:22:21 p34 kernel: [273475.823115] bash S C9077EB4 0 16796 16795 (NOTLB)
463 Jan 24 18:22:21 p34 kernel: [273475.823124] c9077ec8 00000086 00000002 c9077eb4 c9077eb0 00000000 c1fe3280 c4fe1030
464 Jan 24 18:22:21 p34 kernel: [273475.823136] c9077e6c c0114471 00000001 00000003 3d9e642b 0000f8b0 00000d14 c4fe113c
465 Jan 24 18:22:21 p34 kernel: [273475.823147] c1fe3280 00000001 c041facd c9077f14 ca955ac0 00000001 00000001 00000000
466 Jan 24 18:22:21 p34 kernel: [273475.823267] Call Trace:
467 Jan 24 18:22:21 p34 kernel: [273475.823270] [<c0114471>] __activate_task+0x21/0x40
468 Jan 24 18:22:21 p34 kernel: [273475.823275] [<c041facd>] __sched_text_start+0x2ed/0x950
469 Jan 24 18:22:21 p34 kernel: [273475.823280] [<c04209c6>] schedule_timeout+0x76/0xd0
470 Jan 24 18:22:21 p34 kernel: [273475.823285] [<c012ed7d>] add_wait_queue+0x1d/0x50
471 Jan 24 18:22:21 p34 kernel: [273475.823289] [<c02af0de>] read_chan+0x1be/0x620
472 Jan 24 18:22:21 p34 kernel: [273475.823294] [<c0116c10>] default_wake_function+0x0/0x10
473 Jan 24 18:22:21 p34 kernel: [273475.823298] [<c02abc5d>] tty_read+0x8d/0xe0
474 Jan 24 18:22:21 p34 kernel: [273475.823302] [<c02aef20>] read_chan+0x0/0x620
475 Jan 24 18:22:21 p34 kernel: [273475.823306] [<c0159751>] vfs_read+0xa1/0x160
476 Jan 24 18:22:21 p34 kernel: [273475.823310] [<c02abbd0>] tty_read+0x0/0xe0
477 Jan 24 18:22:21 p34 kernel: [273475.823314] [<c0159cc1>] sys_read+0x41/0x70
478 Jan 24 18:22:21 p34 kernel: [273475.823531] [<c0103138>] syscall_call+0x7/0xb
479 Jan 24 18:22:21 p34 kernel: [273475.823535] =======================
480 Jan 24 18:22:21 p34 kernel: [273475.823538] sshd S C4037740 0 16808 2263 16810 16827 15052 (NOTLB)
481 Jan 24 18:22:21 p34 kernel: [273475.823547] c8f01db8 00000086 c8f01dfc c4037740 c4037e40 00000001 00000040 cfe44a90
482 Jan 24 18:22:21 p34 kernel: [273475.823558] 043de32d 0000f8af 00000000 00000008 043ed5a7 0000f8af 000002c9 cfe44b9c
483 Jan 24 18:22:21 p34 kernel: [273475.823570] c1fe3280 00000001 00002138 00000000 c9a9d8c0 f7a2584c c8f01e08 c016a258
484 Jan 24 18:22:21 p34 kernel: [273475.823792] Call Trace:
485 Jan 24 18:22:21 p34 kernel: [273475.823795] [<c016a258>] dput+0x18/0x150
486 Jan 24 18:22:21 p34 kernel: [273475.823799] [<c01600c5>] do_lookup+0x65/0x190
487 Jan 24 18:22:21 p34 kernel: [273475.823804] [<c04209c6>] schedule_timeout+0x76/0xd0
488 Jan 24 18:22:21 p34 kernel: [273475.823808] [<c0161f84>] __link_path_walk+0xb04/0xc90
489 Jan 24 18:22:21 p34 kernel: [273475.823813] [<c012ec70>] prepare_to_wait+0x20/0x70
490 Jan 24 18:22:21 p34 kernel: [273475.823817] [<c0409218>] unix_stream_recvmsg+0x358/0x500
491 Jan 24 18:22:21 p34 kernel: [273475.823822] [<c0162175>] link_path_walk+0x65/0xc0
492 Jan 24 18:22:21 p34 kernel: [273475.823827] [<c012eac0>] autoremove_wake_function+0x0/0x50
493 Jan 24 18:22:21 p34 kernel: [273475.823831] [<c03a1538>] sock_aio_read+0xe8/0x100
494 Jan 24 18:22:21 p34 kernel: [273475.823836] [<c01623c6>] do_path_lookup+0x86/0x1d0
495 Jan 24 18:22:21 p34 kernel: [273475.823841] [<c0158e37>] do_sync_read+0xc7/0x110
496 Jan 24 18:22:21 p34 kernel: [273475.823845] [<c015b7da>] chrdev_open+0x7a/0x150
497 Jan 24 18:22:21 p34 kernel: [273475.823850] [<c012eac0>] autoremove_wake_function+0x0/0x50
498 Jan 24 18:22:21 p34 kernel: [273475.823963] [<c01597ff>] vfs_read+0x14f/0x160
499 Jan 24 18:22:21 p34 kernel: [273475.823968] [<c0159cc1>] sys_read+0x41/0x70
500 Jan 24 18:22:21 p34 kernel: [273475.823972] [<c0103138>] syscall_call+0x7/0xb
501 Jan 24 18:22:21 p34 kernel: [273475.823976] =======================
502 Jan 24 18:22:21 p34 kernel: [273475.823979] sshd S C4061B3C 0 16810 16808 16811 (NOTLB)
503 Jan 24 18:22:21 p34 kernel: [273475.823988] c4061b50 00000086 00000002 c4061b3c c4061b38 00000000 c7a833f0 c7857030
504 Jan 24 18:22:21 p34 kernel: [273475.823999] c933258a 0000f8b2 00000202 0000000a a65d46f3 0000f8b3 00010b80 c785713c
505 Jan 24 18:22:21 p34 kernel: [273475.824220] c1fe3280 00000001 c03ca36b 19bfcc00 c9a9dac0 00000003 c04df568 00000202
506 Jan 24 18:22:21 p34 kernel: [273475.824232] Call Trace:
507 Jan 24 18:22:21 p34 kernel: [273475.824235] [<c03ca36b>] tcp_packet+0x44b/0xb50
508 Jan 24 18:22:21 p34 kernel: [273475.824240] [<c0124bc0>] lock_timer_base+0x20/0x50
509 Jan 24 18:22:21 p34 kernel: [273475.824406] [<c042099b>] schedule_timeout+0x4b/0xd0
510 Jan 24 18:22:21 p34 kernel: [273475.824411] [<c0124330>] process_timeout+0x0/0x10
511 Jan 24 18:22:21 p34 kernel: [273475.824415] [<c016564e>] do_select+0x3be/0x4a0
512 Jan 24 18:22:21 p34 kernel: [273475.824420] [<c0165d20>] __pollwait+0x0/0x100
513 Jan 24 18:22:21 p34 kernel: [273475.824425] [<c0116c10>] default_wake_function+0x0/0x10
514 Jan 24 18:22:21 p34 kernel: [273475.824429] [<c0116c10>] default_wake_function+0x0/0x10
515 Jan 24 18:22:21 p34 kernel: [273475.824433] [<c0116c10>] default_wake_function+0x0/0x10
516 Jan 24 18:22:21 p34 kernel: [273475.824438] [<c0116c10>] default_wake_function+0x0/0x10
517 Jan 24 18:22:21 p34 kernel: [273475.824442] [<c03ae411>] dev_queue_xmit+0xb1/0x320
518 Jan 24 18:22:21 p34 kernel: [273475.824446] [<c03d8147>] ip_output+0x137/0x270
519 Jan 24 18:22:21 p34 kernel: [273475.824451] [<c03d6b50>] ip_finish_output+0x0/0x1d0
520 Jan 24 18:22:21 p34 kernel: [273475.824455] [<c03d5210>] dst_output+0x0/0x10
521 Jan 24 18:22:21 p34 kernel: [273475.824459] [<c03d767f>] ip_queue_xmit+0x1bf/0x480
522 Jan 24 18:22:21 p34 kernel: [273475.824464] [<c03d5210>] dst_output+0x0/0x10
523 Jan 24 18:22:21 p34 kernel: [273475.824468] [<c03e7658>] tcp_transmit_skb+0x3a8/0x710
524 Jan 24 18:22:21 p34 kernel: [273475.824473] [<c03a3ef3>] release_sock+0x13/0xc0
525 Jan 24 18:22:21 p34 kernel: [273475.824477] [<c0124bc0>] lock_timer_base+0x20/0x50
526 Jan 24 18:22:21 p34 kernel: [273475.824481] [<c0124cf8>] __mod_timer+0x98/0xb0
527 Jan 24 18:22:21 p34 kernel: [273475.824486] [<c03a408c>] sk_reset_timer+0xc/0x20
528 Jan 24 18:22:21 p34 kernel: [273475.824652] [<c03e8f57>] __tcp_push_pending_frames+0x127/0x8a0
529 Jan 24 18:22:21 p34 kernel: [273475.824657] [<c03a3ef3>] release_sock+0x13/0xc0
530 Jan 24 18:22:21 p34 kernel: [273475.824661] [<c03de01f>] tcp_sendmsg+0x77f/0xb30
531 Jan 24 18:22:21 p34 kernel: [273475.824666] [<c02aff97>] pty_write+0x47/0x60
532 Jan 24 18:22:21 p34 kernel: [273475.824670] [<c0165908>] core_sys_select+0x1d8/0x2f0
533 Jan 24 18:22:21 p34 kernel: [273475.824675] [<c03a142a>] sock_aio_write+0xea/0x110
534 Jan 24 18:22:21 p34 kernel: [273475.824679] [<c02a98a4>] tty_ldisc_deref+0x44/0x70
535 Jan 24 18:22:21 p34 kernel: [273475.824684] [<c0158d27>] do_sync_write+0xc7/0x110
536 Jan 24 18:22:21 p34 kernel: [273475.824689] [<c02a9943>] tty_wakeup+0x33/0x70
537 Jan 24 18:22:21 p34 kernel: [273475.824693] [<c012eac0>] autoremove_wake_function+0x0/0x50
538 Jan 24 18:22:21 p34 kernel: [273475.824698] [<c0166172>] sys_select+0xe2/0x1a0
539 Jan 24 18:22:21 p34 kernel: [273475.824703] [<c0103138>] syscall_call+0x7/0xb
540 Jan 24 18:22:21 p34 kernel: [273475.824707] [<c042007b>] __sched_text_start+0x89b/0x950
541 Jan 24 18:22:21 p34 kernel: [273475.824712] =======================
542 Jan 24 18:22:21 p34 kernel: [273475.824714] bash S C041FAFB 0 16811 16810 16820 (NOTLB)
543 Jan 24 18:22:21 p34 kernel: [273475.824723] c3205f24 00000082 f7eca000 c041fafb c7a83000 c7a83000 c04d442c c9d20030
544 Jan 24 18:22:21 p34 kernel: [273475.824895] c04d4428 00000010 c1c57320 00000004 1f844bd7 0000f8af 000123b4 c9d2013c
545 Jan 24 18:22:21 p34 kernel: [273475.824907] c1fdf080 00000000 fffb9000 080fb500 cdf00740 cdf00740 c1958f20 c3205f28
546 Jan 24 18:22:21 p34 kernel: [273475.824919] Call Trace:
547 Jan 24 18:22:21 p34 kernel: [273475.824922] [<c041fafb>] __sched_text_start+0x31b/0x950
548 Jan 24 18:22:21 p34 kernel: [273475.824927] [<c012eadb>] autoremove_wake_function+0x1b/0x50
549 Jan 24 18:22:21 p34 kernel: [273475.824931] [<c0148071>] __handle_mm_fault+0x651/0x900
550 Jan 24 18:22:21 p34 kernel: [273475.825149] [<c011d9ec>] do_wait+0x1cc/0xb30
551 Jan 24 18:22:21 p34 kernel: [273475.825153] [<c01147b8>] __wake_up+0x38/0x50
552 Jan 24 18:22:21 p34 kernel: [273475.825157] [<c015ecb6>] pipe_release+0x86/0xb0
553 Jan 24 18:22:21 p34 kernel: [273475.825161] [<c016453f>] do_ioctl+0x7f/0x90
554 Jan 24 18:22:21 p34 kernel: [273475.825166] [<c0116c10>] default_wake_function+0x0/0x10
555 Jan 24 18:22:21 p34 kernel: [273475.825170] [<c011e381>] sys_wait4+0x31/0x40
556 Jan 24 18:22:21 p34 kernel: [273475.825174] [<c011e3b5>] sys_waitpid+0x25/0x30
557 Jan 24 18:22:21 p34 kernel: [273475.825178] [<c0103138>] syscall_call+0x7/0xb
558 Jan 24 18:22:21 p34 kernel: [273475.825183] [<c0420033>] __sched_text_start+0x853/0x950
559 Jan 24 18:22:21 p34 kernel: [273475.825187] =======================
560 Jan 24 18:22:21 p34 kernel: [273475.825190] su S CAEB9F10 0 16820 16811 16821 (NOTLB)
561 Jan 24 18:22:21 p34 kernel: [273475.825199] caeb9f24 00000082 00000002 caeb9f10 caeb9f0c 00000000 c63e2030 c63e2030
562 Jan 24 18:22:21 p34 kernel: [273475.825210] f754a95c c6cdc560 cdf00b40 00000006 51c7e69f 0000f8af 00007044 c63e213c
563 Jan 24 18:22:21 p34 kernel: [273475.825382] c1fe3280 00000001 c1fe3280 0804e104 cdf00d40 cdf00d40 cdf00d40 c7bea000
564 Jan 24 18:22:21 p34 kernel: [273475.825394] Call Trace:
565 Jan 24 18:22:21 p34 kernel: [273475.825397] [<c011a38e>] copy_process+0xd7e/0xfc0
566 Jan 24 18:22:21 p34 kernel: [273475.825401] [<c011d9ec>] do_wait+0x1cc/0xb30
567 Jan 24 18:22:21 p34 kernel: [273475.825406] [<c01258aa>] do_sigaction+0xea/0x1b0
568 Jan 24 18:22:21 p34 kernel: [273475.825410] [<c0116c10>] default_wake_function+0x0/0x10
569 Jan 24 18:22:21 p34 kernel: [273475.825415] [<c011e381>] sys_wait4+0x31/0x40
570 Jan 24 18:22:21 p34 kernel: [273475.825419] [<c011e3b5>] sys_waitpid+0x25/0x30
571 Jan 24 18:22:21 p34 kernel: [273475.825423] [<c0103138>] syscall_call+0x7/0xb
572 Jan 24 18:22:21 p34 kernel: [273475.825427] [<c0420033>] __sched_text_start+0x853/0x950
573 Jan 24 18:22:21 p34 kernel: [273475.825642] =======================
574 Jan 24 18:22:21 p34 kernel: [273475.825645] bash D C7BEBAAC 0 16821 16820 (NOTLB)
575 Jan 24 18:22:21 p34 kernel: [273475.825653] c7bebac0 00000082 00000002 c7bebaac c7bebaa8 00000000 5b48e428 c6cdc560
576 Jan 24 18:22:21 p34 kernel: [273475.825665] c7bebad8 00010b03 00000011 00000009 cb093a53 0000f8b2 00017216 c6cdc66c
577 Jan 24 18:22:21 p34 kernel: [273475.825838] c1fe3280 00000001 c20c70c0 c3272058 f75c4a80 c7bebad8 c016a258 f7b12520
578 Jan 24 18:22:21 p34 kernel: [273475.825850] Call Trace:
579 Jan 24 18:22:21 p34 kernel: [273475.825853] [<c016a258>] dput+0x18/0x150
580 Jan 24 18:22:21 p34 kernel: [273475.825857] [<c0161f84>] __link_path_walk+0xb04/0xc90
581 Jan 24 18:22:21 p34 kernel: [273475.825862] [<c03600ad>] md_write_start+0x8d/0x120
582 Jan 24 18:22:21 p34 kernel: [273475.825867] [<c012eac0>] autoremove_wake_function+0x0/0x50
583 Jan 24 18:22:21 p34 kernel: [273475.825871] [<c03557a8>] make_request+0x38/0x560
584 Jan 24 18:22:21 p34 kernel: [273475.825876] [<c02409ce>] xfs_log_move_tail+0x3e/0x1b0
585 Jan 24 18:22:21 p34 kernel: [273475.825881] [<c023c9fa>] xfs_iomap+0x2ca/0x720
586 Jan 24 18:22:21 p34 kernel: [273475.825885] [<c026d77a>] generic_make_request+0xda/0x150
587 Jan 24 18:22:21 p34 kernel: [273475.825890] [<c026fe32>] submit_bio+0x72/0x110
588 Jan 24 18:22:21 p34 kernel: [273475.825895] [<c013da6b>] mempool_alloc+0x2b/0xf0
589 Jan 24 18:22:21 p34 kernel: [273475.825899] [<c034f1a0>] raid5_mergeable_bvec+0x0/0x90
590 Jan 24 18:22:21 p34 kernel: [273475.825904] [<c017c052>] __bio_add_page+0x102/0x190
591 Jan 24 18:22:21 p34 kernel: [273475.825909] [<c017c117>] bio_add_page+0x37/0x50
592 Jan 24 18:22:21 p34 kernel: [273475.826073] [<c025be8b>] xfs_submit_ioend_bio+0x1b/0x30
593 Jan 24 18:22:21 p34 kernel: [273475.826078] [<c025c10e>] xfs_page_state_convert+0x26e/0xff0
594 Jan 24 18:22:21 p34 kernel: [273475.826082] [<c0155509>] slab_destroy+0x59/0x90
595 Jan 24 18:22:21 p34 kernel: [273475.826088] [<c025d102>] xfs_vm_writepage+0x62/0x100
596 Jan 24 18:22:21 p34 kernel: [273475.826092] [<c014396d>] shrink_inactive_list+0x5dd/0x8a0
597 Jan 24 18:22:21 p34 kernel: [273475.826097] [<c0143cd1>] shrink_zone+0xa1/0x100
598 Jan 24 18:22:21 p34 kernel: [273475.826102] [<c01447e0>] try_to_free_pages+0x140/0x260
599 Jan 24 18:22:21 p34 kernel: [273475.826106] [<c013fb4f>] __alloc_pages+0x13f/0x2f0
600 Jan 24 18:22:21 p34 kernel: [273475.826111] [<c0350dd3>] grow_one_stripe+0x93/0x100
601 Jan 24 18:22:21 p34 kernel: [273475.826115] [<c0350ee6>] raid5_store_stripe_cache_size+0xa6/0xc0
602 Jan 24 18:22:21 p34 kernel: [273475.826120] [<c0361a83>] md_attr_store+0x73/0x90
603 Jan 24 18:22:21 p34 kernel: [273475.826125] [<c0192302>] sysfs_write_file+0xa2/0x100
604 Jan 24 18:22:21 p34 kernel: [273475.826129] [<c01595f6>] vfs_write+0xa6/0x160
605 Jan 24 18:22:21 p34 kernel: [273475.826134] [<c0192260>] sysfs_write_file+0x0/0x100
606 Jan 24 18:22:21 p34 kernel: [273475.826138] [<c0159d31>] sys_write+0x41/0x70
607 Jan 24 18:22:21 p34 kernel: [273475.826303] [<c0103138>] syscall_call+0x7/0xb
608 Jan 24 18:22:21 p34 kernel: [273475.826307] =======================
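The `bash D` trace above (read bottom-up) suggests the shape of the hang: the write to `stripe_cache_size` enters `raid5_store_stripe_cache_size` and `grow_one_stripe`, the allocation falls into page reclaim (`try_to_free_pages`), reclaim pushes dirty XFS pages back through `make_request` into `md_write_start` on the very raid5 array being reconfigured, and the writer sleeps in D state. A toy Python model of that self-deadlock (purely illustrative, not kernel code; the lock stands in for the array's reconfiguration state):

```python
import threading

# Toy model: the sysfs store path holds a resource while allocating
# memory; reclaim then needs to write back to the same array, which
# requires that resource again.
reconfig = threading.Lock()

reconfig.acquire()  # raid5_store_stripe_cache_size: array being resized
# grow_one_stripe -> __alloc_pages -> try_to_free_pages: reclaim wants
# to flush dirty pages to the md device, i.e. re-enter the array...
writeback_can_proceed = reconfig.acquire(blocking=False)
print(writeback_can_proceed)  # False: writeback blocks, the allocation
                              # never completes, the echo hangs in D
reconfig.release()
```

In the real kernel both steps run in one task, so nothing ever releases the resource and every subsequent I/O to the array queues up behind it, which matches the many D-state processes in the dump.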
609 Jan 24 18:22:21 p34 kernel: [273475.826309] sshd S CC49DDA4 0 16827 2263 16829 16808 (NOTLB)
610 Jan 24 18:22:21 p34 kernel: [273475.826318] cc49ddb8 00000086 00000002 cc49dda4 cc49dda0 00000000 00000040 c2191560
611 Jan 24 18:22:21 p34 kernel: [273475.826330] 00000000 00000000 00000000 00000004 f2843ff8 0000f8b8 00000517 c219166c
612 Jan 24 18:22:21 p34 kernel: [273475.826498] c1fe3280 00000001 01b42f73 f7bd1e84 cb1f8900 f7a2584c cc49de08 c016a258
613 Jan 24 18:22:21 p34 kernel: [273475.826510] Call Trace:
614 Jan 24 18:22:21 p34 kernel: [273475.826513] [<c016a258>] dput+0x18/0x150
615 Jan 24 18:22:21 p34 kernel: [273475.826574] [<c04209c6>] schedule_timeout+0x76/0xd0
616 Jan 24 18:22:21 p34 kernel: [273475.826578] [<c0161f84>] __link_path_walk+0xb04/0xc90
617 Jan 24 18:22:21 p34 kernel: [273475.826583] [<c012ec70>] prepare_to_wait+0x20/0x70
618 Jan 24 18:22:21 p34 kernel: [273475.826587] [<c0409218>] unix_stream_recvmsg+0x358/0x500
619 Jan 24 18:22:21 p34 kernel: [273475.826593] [<c0162175>] link_path_walk+0x65/0xc0
620 Jan 24 18:22:21 p34 kernel: [273475.826597] [<c012eac0>] autoremove_wake_function+0x0/0x50
621 Jan 24 18:22:21 p34 kernel: [273475.826602] [<c03a1538>] sock_aio_read+0xe8/0x100
622 Jan 24 18:22:21 p34 kernel: [273475.826606] [<c01623c6>] do_path_lookup+0x86/0x1d0
623 Jan 24 18:22:21 p34 kernel: [273475.826611] [<c0158e37>] do_sync_read+0xc7/0x110
624 Jan 24 18:22:21 p34 kernel: [273475.826616] [<c015b7da>] chrdev_open+0x7a/0x150
625 Jan 24 18:22:21 p34 kernel: [273475.826620] [<c012eac0>] autoremove_wake_function+0x0/0x50
626 Jan 24 18:22:21 p34 kernel: [273475.826783] [<c01597ff>] vfs_read+0x14f/0x160
627 Jan 24 18:22:21 p34 kernel: [273475.826788] [<c0159cc1>] sys_read+0x41/0x70
628 Jan 24 18:22:21 p34 kernel: [273475.826792] [<c0103138>] syscall_call+0x7/0xb
629 Jan 24 18:22:21 p34 kernel: [273475.826796] =======================
630 Jan 24 18:22:21 p34 kernel: [273475.826798] sshd S C0138245 0 16829 16827 16830 (NOTLB)
631 Jan 24 18:22:21 p34 kernel: [273475.826807] c6969b50 00000086 c0106dcb c0138245 c0507e00 00000000 00000000 c4d24560
632 Jan 24 18:22:21 p34 kernel: [273475.826819] 846bb04b 0000f8b9 00000202 0000000a 846bf4c2 0000f8b9 00008838 c4d2466c
633 Jan 24 18:22:21 p34 kernel: [273475.826991] c1fe3280 00000001 00001ce8 00000000 caecf8c0 00000003 c04df568 00000202
634 Jan 24 18:22:21 p34 kernel: [273475.827003] Call Trace:
635 Jan 24 18:22:21 p34 kernel: [273475.827006] [<c0106dcb>] timer_interrupt+0x4b/0x80
636 Jan 24 18:22:21 p34 kernel: [273475.827010] [<c0138245>] handle_IRQ_event+0x25/0x60
637 Jan 24 18:22:21 p34 kernel: [273475.827015] [<c0124bc0>] lock_timer_base+0x20/0x50
638 Jan 24 18:22:21 p34 kernel: [273475.827019] [<c0124cf8>] __mod_timer+0x98/0xb0
639 Jan 24 18:22:21 p34 kernel: [273475.827023] [<c042099b>] schedule_timeout+0x4b/0xd0
640 Jan 24 18:22:21 p34 kernel: [273475.827028] [<c0124330>] process_timeout+0x0/0x10
641 Jan 24 18:22:21 p34 kernel: [273475.827032] [<c016564e>] do_select+0x3be/0x4a0
642 Jan 24 18:22:21 p34 kernel: [273475.827037] [<c0165d20>] __pollwait+0x0/0x100
643 Jan 24 18:22:21 p34 kernel: [273475.827042] [<c0116c10>] default_wake_function+0x0/0x10
644 Jan 24 18:22:21 p34 kernel: [273475.827046] [<c0116c10>] default_wake_function+0x0/0x10
645 Jan 24 18:22:21 p34 kernel: [273475.827209] [<c0116c10>] default_wake_function+0x0/0x10
646 Jan 24 18:22:21 p34 kernel: [273475.827213] [<c0116c10>] default_wake_function+0x0/0x10
647 Jan 24 18:22:21 p34 kernel: [273475.827217] [<c03ae411>] dev_queue_xmit+0xb1/0x320
648 Jan 24 18:22:21 p34 kernel: [273475.827222] [<c03d8147>] ip_output+0x137/0x270
649 Jan 24 18:22:21 p34 kernel: [273475.827226] [<c03d6b50>] ip_finish_output+0x0/0x1d0
650 Jan 24 18:22:21 p34 kernel: [273475.827231] [<c03d5210>] dst_output+0x0/0x10
651 Jan 24 18:22:21 p34 kernel: [273475.827235] [<c03d767f>] ip_queue_xmit+0x1bf/0x480
652 Jan 24 18:22:21 p34 kernel: [273475.827240] [<c03d5210>] dst_output+0x0/0x10
653 Jan 24 18:22:21 p34 kernel: [273475.827244] [<c03e2516>] tcp_ack+0xd96/0x1c30
654 Jan 24 18:22:21 p34 kernel: [273475.827249] [<c03e7658>] tcp_transmit_skb+0x3a8/0x710
655 Jan 24 18:22:21 p34 kernel: [273475.827253] [<c0124bc0>] lock_timer_base+0x20/0x50
656 Jan 24 18:22:21 p34 kernel: [273475.827257] [<c0124cf8>] __mod_timer+0x98/0xb0
657 Jan 24 18:22:21 p34 kernel: [273475.827261] [<c03a408c>] sk_reset_timer+0xc/0x20
658 Jan 24 18:22:21 p34 kernel: [273475.827476] [<c03e8f57>] __tcp_push_pending_frames+0x127/0x8a0
659 Jan 24 18:22:21 p34 kernel: [273475.827481] [<c03a3ef3>] release_sock+0x13/0xc0
660 Jan 24 18:22:21 p34 kernel: [273475.827486] [<c03de01f>] tcp_sendmsg+0x77f/0xb30
661 Jan 24 18:22:21 p34 kernel: [273475.827490] [<c0165908>] core_sys_select+0x1d8/0x2f0
662 Jan 24 18:22:21 p34 kernel: [273475.827495] [<c03a142a>] sock_aio_write+0xea/0x110
663 Jan 24 18:22:21 p34 kernel: [273475.827500] [<c0114229>] __wake_up_common+0x39/0x70
664 Jan 24 18:22:21 p34 kernel: [273475.827505] [<c0158d27>] do_sync_write+0xc7/0x110
665 Jan 24 18:22:21 p34 kernel: [273475.827510] [<c02a9943>] tty_wakeup+0x33/0x70
666 Jan 24 18:22:21 p34 kernel: [273475.827515] [<c012eac0>] autoremove_wake_function+0x0/0x50
667 Jan 24 18:22:21 p34 kernel: [273475.827520] [<c0166172>] sys_select+0xe2/0x1a0
668 Jan 24 18:22:21 p34 kernel: [273475.827524] [<c0103138>] syscall_call+0x7/0xb
669 Jan 24 18:22:21 p34 kernel: [273475.827528] =======================
670 Jan 24 18:22:21 p34 kernel: [273475.827531] bash S C041FAFB 0 16830 16829 16839 (NOTLB)
671 Jan 24 18:22:21 p34 kernel: [273475.827595] c9551f24 00000082 f7eca000 c041fafb f6187000 f6187000 c04d442c c4d24030
672 Jan 24 18:22:21 p34 kernel: [273475.827607] c04d4428 00000010 c1a140e0 00000003 106229a4 0000f8b9 00010870 c4d2413c
673 Jan 24 18:22:21 p34 kernel: [273475.827777] c1fdf080 00000000 fffb9000 080fb500 cdf00b40 cdf00b40 c18f3ea0 c9551f28
674 Jan 24 18:22:21 p34 kernel: [273475.827789] Call Trace:
675 Jan 24 18:22:21 p34 kernel: [273475.827792] [<c041fafb>] __sched_text_start+0x31b/0x950
676 Jan 24 18:22:21 p34 kernel: [273475.827797] [<c012eadb>] autoremove_wake_function+0x1b/0x50
677 Jan 24 18:22:21 p34 kernel: [273475.827802] [<c0148071>] __handle_mm_fault+0x651/0x900
678 Jan 24 18:22:21 p34 kernel: [273475.827807] [<c011d9ec>] do_wait+0x1cc/0xb30
679 Jan 24 18:22:21 p34 kernel: [273475.827811] [<c01147b8>] __wake_up+0x38/0x50
680 Jan 24 18:22:21 p34 kernel: [273475.827815] [<c015ecb6>] pipe_release+0x86/0xb0
681 Jan 24 18:22:21 p34 kernel: [273475.827980] [<c016453f>] do_ioctl+0x7f/0x90
682 Jan 24 18:22:21 p34 kernel: [273475.827984] [<c0116c10>] default_wake_function+0x0/0x10
683 Jan 24 18:22:21 p34 kernel: [273475.827989] [<c011e381>] sys_wait4+0x31/0x40
684 Jan 24 18:22:21 p34 kernel: [273475.827993] [<c011e3b5>] sys_waitpid+0x25/0x30
685 Jan 24 18:22:21 p34 kernel: [273475.827997] [<c0103138>] syscall_call+0x7/0xb
686 Jan 24 18:22:21 p34 kernel: [273475.828001] =======================
687 Jan 24 18:22:21 p34 kernel: [273475.828004] su S CFFA7F10 0 16839 16830 16840 (NOTLB)
688 Jan 24 18:22:21 p34 kernel: [273475.828013] cffa7f24 00000082 00000002 cffa7f10 cffa7f0c 00000000 c56b6560 c56b6560
689 Jan 24 18:22:21 p34 kernel: [273475.828024] f6277d4c f769a560 f76a47c0 00000006 42820811 0000f8b9 00006837 c56b666c
690 Jan 24 18:22:21 p34 kernel: [273475.828247] c1fdf080 00000000 c1fe3280 0804e104 cfde4580 cfde4580 cfde4580 c9710000
691 Jan 24 18:22:21 p34 kernel: [273475.828259] Call Trace:
692 Jan 24 18:22:21 p34 kernel: [273475.828262] [<c011a38e>] copy_process+0xd7e/0xfc0
693 Jan 24 18:22:21 p34 kernel: [273475.828266] [<c011d9ec>] do_wait+0x1cc/0xb30
694 Jan 24 18:22:21 p34 kernel: [273475.828271] [<c01258aa>] do_sigaction+0xea/0x1b0
695 Jan 24 18:22:21 p34 kernel: [273475.828275] [<c0116c10>] default_wake_function+0x0/0x10
696 Jan 24 18:22:21 p34 kernel: [273475.828280] [<c011e381>] sys_wait4+0x31/0x40
697 Jan 24 18:22:21 p34 kernel: [273475.828284] [<c011e3b5>] sys_waitpid+0x25/0x30
698 Jan 24 18:22:21 p34 kernel: [273475.828288] [<c0103138>] syscall_call+0x7/0xb
699 Jan 24 18:22:21 p34 kernel: [273475.828293] [<c0420033>] __sched_text_start+0x853/0x950
700 Jan 24 18:22:21 p34 kernel: [273475.828406] =======================
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: change strip_cache_size freeze the whole raid
2007-01-24 23:24 ` Justin Piszcz
@ 2007-01-25 0:13 ` Neil Brown
2007-01-25 0:16 ` Justin Piszcz
0 siblings, 1 reply; 23+ messages in thread
From: Neil Brown @ 2007-01-25 0:13 UTC (permalink / raw)
To: Justin Piszcz; +Cc: kyle, linux-raid
On Wednesday January 24, jpiszcz@lucidpixels.com wrote:
> Here you go Neil:
>
> p34:~# echo 512 > /sys/block/md3/md/stripe_cache_size
> p34:~# echo 1024 > /sys/block/md3/md/stripe_cache_size
> p34:~# echo 2048 > /sys/block/md3/md/stripe_cache_size
> p34:~# echo 4096 > /sys/block/md3/md/stripe_cache_size
> p34:~# echo 8192 > /sys/block/md3/md/stripe_cache_size
> <...... FROZEN ........>
>
> I ran echo t > /proc/sysrq-trigger and then copied the relevant parts of
> kern.log and I am attaching them to this e-mail.
>
> Please confirm this is what you needed.
Perfect. Thanks.
This bit:
574 Jan 24 18:22:21 p34 kernel: [273475.825645] bash D C7BEBAAC 0 16821 16820 (NOTLB)
575 Jan 24 18:22:21 p34 kernel: [273475.825653] c7bebac0 00000082 00000002 c7bebaac c7bebaa8 00000000 5b48e428 c6cdc560
576 Jan 24 18:22:21 p34 kernel: [273475.825665] c7bebad8 00010b03 00000011 00000009 cb093a53 0000f8b2 00017216 c6cdc66c
577 Jan 24 18:22:21 p34 kernel: [273475.825838] c1fe3280 00000001 c20c70c0 c3272058 f75c4a80 c7bebad8 c016a258 f7b12520
578 Jan 24 18:22:21 p34 kernel: [273475.825850] Call Trace:
579 Jan 24 18:22:21 p34 kernel: [273475.825853] [<c016a258>] dput+0x18/0x150
580 Jan 24 18:22:21 p34 kernel: [273475.825857] [<c0161f84>] __link_path_walk+0xb04/0xc90
581 Jan 24 18:22:21 p34 kernel: [273475.825862] [<c03600ad>] md_write_start+0x8d/0x120
582 Jan 24 18:22:21 p34 kernel: [273475.825867] [<c012eac0>] autoremove_wake_function+0x0/0x50
583 Jan 24 18:22:21 p34 kernel: [273475.825871] [<c03557a8>] make_request+0x38/0x560
584 Jan 24 18:22:21 p34 kernel: [273475.825876] [<c02409ce>] xfs_log_move_tail+0x3e/0x1b0
585 Jan 24 18:22:21 p34 kernel: [273475.825881] [<c023c9fa>] xfs_iomap+0x2ca/0x720
586 Jan 24 18:22:21 p34 kernel: [273475.825885] [<c026d77a>] generic_make_request+0xda/0x150
587 Jan 24 18:22:21 p34 kernel: [273475.825890] [<c026fe32>] submit_bio+0x72/0x110
588 Jan 24 18:22:21 p34 kernel: [273475.825895] [<c013da6b>] mempool_alloc+0x2b/0xf0
589 Jan 24 18:22:21 p34 kernel: [273475.825899] [<c034f1a0>] raid5_mergeable_bvec+0x0/0x90
590 Jan 24 18:22:21 p34 kernel: [273475.825904] [<c017c052>] __bio_add_page+0x102/0x190
591 Jan 24 18:22:21 p34 kernel: [273475.825909] [<c017c117>] bio_add_page+0x37/0x50
592 Jan 24 18:22:21 p34 kernel: [273475.826073] [<c025be8b>] xfs_submit_ioend_bio+0x1b/0x30
593 Jan 24 18:22:21 p34 kernel: [273475.826078] [<c025c10e>] xfs_page_state_convert+0x26e/0xff0
594 Jan 24 18:22:21 p34 kernel: [273475.826082] [<c0155509>] slab_destroy+0x59/0x90
595 Jan 24 18:22:21 p34 kernel: [273475.826088] [<c025d102>] xfs_vm_writepage+0x62/0x100
596 Jan 24 18:22:21 p34 kernel: [273475.826092] [<c014396d>] shrink_inactive_list+0x5dd/0x8a0
597 Jan 24 18:22:21 p34 kernel: [273475.826097] [<c0143cd1>] shrink_zone+0xa1/0x100
598 Jan 24 18:22:21 p34 kernel: [273475.826102] [<c01447e0>] try_to_free_pages+0x140/0x260
599 Jan 24 18:22:21 p34 kernel: [273475.826106] [<c013fb4f>] __alloc_pages+0x13f/0x2f0
600 Jan 24 18:22:21 p34 kernel: [273475.826111] [<c0350dd3>] grow_one_stripe+0x93/0x100
601 Jan 24 18:22:21 p34 kernel: [273475.826115] [<c0350ee6>] raid5_store_stripe_cache_size+0xa6/0xc0
602 Jan 24 18:22:21 p34 kernel: [273475.826120] [<c0361a83>] md_attr_store+0x73/0x90
603 Jan 24 18:22:21 p34 kernel: [273475.826125] [<c0192302>] sysfs_write_file+0xa2/0x100
604 Jan 24 18:22:21 p34 kernel: [273475.826129] [<c01595f6>] vfs_write+0xa6/0x160
605 Jan 24 18:22:21 p34 kernel: [273475.826134] [<c0192260>] sysfs_write_file+0x0/0x100
606 Jan 24 18:22:21 p34 kernel: [273475.826138] [<c0159d31>] sys_write+0x41/0x70
607 Jan 24 18:22:21 p34 kernel: [273475.826303] [<c0103138>] syscall_call+0x7/0xb
608 Jan 24 18:22:21 p34 kernel: [273475.826307] =======================
Tells me what is happening.
We try to allocate memory to increase the stripe cache (__alloc_pages)
which requires memory to be freed, so shrink_zone gets called which
calls into the 'xfs' filesystem, which eventually tries to write to
the raid5 array. The raid5 array is currently 'clean' so we need to
mark the superblock as dirty first (md_write_start), but that needs a
lock that is being held while we grow the stripe cache. Deadlock.
So the patch I posted (changing GFP_KERNEL to GFP_NOIO) will avoid
this as it will then fail the allocation rather than initiate IO.
However it might be better if I can find a way to avoid the
deadlock....
I'll see what I can come up with.
Thanks,
NeilBrown
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: change strip_cache_size freeze the whole raid
2007-01-25 0:13 ` Neil Brown
@ 2007-01-25 0:16 ` Justin Piszcz
2007-01-25 2:29 ` Neil Brown
0 siblings, 1 reply; 23+ messages in thread
From: Justin Piszcz @ 2007-01-25 0:16 UTC (permalink / raw)
To: Neil Brown; +Cc: kyle, linux-raid
On Thu, 25 Jan 2007, Neil Brown wrote:
> On Wednesday January 24, jpiszcz@lucidpixels.com wrote:
> > Here you go Neil:
> >
> > p34:~# echo 512 > /sys/block/md3/md/stripe_cache_size
> > p34:~# echo 1024 > /sys/block/md3/md/stripe_cache_size
> > p34:~# echo 2048 > /sys/block/md3/md/stripe_cache_size
> > p34:~# echo 4096 > /sys/block/md3/md/stripe_cache_size
> > p34:~# echo 8192 > /sys/block/md3/md/stripe_cache_size
> > <...... FROZEN ........>
> >
> > I ran echo t > /proc/sysrq-trigger and then copied the relevant parts of
> > kern.log and I am attaching them to this e-mail.
> >
> > Please confirm this is what you needed.
>
> Perfect. Thanks.
>
> This bit:
>
> 574 Jan 24 18:22:21 p34 kernel: [273475.825645] bash D C7BEBAAC 0 16821 16820 (NOTLB)
> 575 Jan 24 18:22:21 p34 kernel: [273475.825653] c7bebac0 00000082 00000002 c7bebaac c7bebaa8 00000000 5b48e428 c6cdc560
> 576 Jan 24 18:22:21 p34 kernel: [273475.825665] c7bebad8 00010b03 00000011 00000009 cb093a53 0000f8b2 00017216 c6cdc66c
> 577 Jan 24 18:22:21 p34 kernel: [273475.825838] c1fe3280 00000001 c20c70c0 c3272058 f75c4a80 c7bebad8 c016a258 f7b12520
> 578 Jan 24 18:22:21 p34 kernel: [273475.825850] Call Trace:
> 579 Jan 24 18:22:21 p34 kernel: [273475.825853] [<c016a258>] dput+0x18/0x150
> 580 Jan 24 18:22:21 p34 kernel: [273475.825857] [<c0161f84>] __link_path_walk+0xb04/0xc90
> 581 Jan 24 18:22:21 p34 kernel: [273475.825862] [<c03600ad>] md_write_start+0x8d/0x120
> 582 Jan 24 18:22:21 p34 kernel: [273475.825867] [<c012eac0>] autoremove_wake_function+0x0/0x50
> 583 Jan 24 18:22:21 p34 kernel: [273475.825871] [<c03557a8>] make_request+0x38/0x560
> 584 Jan 24 18:22:21 p34 kernel: [273475.825876] [<c02409ce>] xfs_log_move_tail+0x3e/0x1b0
> 585 Jan 24 18:22:21 p34 kernel: [273475.825881] [<c023c9fa>] xfs_iomap+0x2ca/0x720
> 586 Jan 24 18:22:21 p34 kernel: [273475.825885] [<c026d77a>] generic_make_request+0xda/0x150
> 587 Jan 24 18:22:21 p34 kernel: [273475.825890] [<c026fe32>] submit_bio+0x72/0x110
> 588 Jan 24 18:22:21 p34 kernel: [273475.825895] [<c013da6b>] mempool_alloc+0x2b/0xf0
> 589 Jan 24 18:22:21 p34 kernel: [273475.825899] [<c034f1a0>] raid5_mergeable_bvec+0x0/0x90
> 590 Jan 24 18:22:21 p34 kernel: [273475.825904] [<c017c052>] __bio_add_page+0x102/0x190
> 591 Jan 24 18:22:21 p34 kernel: [273475.825909] [<c017c117>] bio_add_page+0x37/0x50
> 592 Jan 24 18:22:21 p34 kernel: [273475.826073] [<c025be8b>] xfs_submit_ioend_bio+0x1b/0x30
> 593 Jan 24 18:22:21 p34 kernel: [273475.826078] [<c025c10e>] xfs_page_state_convert+0x26e/0xff0
> 594 Jan 24 18:22:21 p34 kernel: [273475.826082] [<c0155509>] slab_destroy+0x59/0x90
> 595 Jan 24 18:22:21 p34 kernel: [273475.826088] [<c025d102>] xfs_vm_writepage+0x62/0x100
> 596 Jan 24 18:22:21 p34 kernel: [273475.826092] [<c014396d>] shrink_inactive_list+0x5dd/0x8a0
> 597 Jan 24 18:22:21 p34 kernel: [273475.826097] [<c0143cd1>] shrink_zone+0xa1/0x100
> 598 Jan 24 18:22:21 p34 kernel: [273475.826102] [<c01447e0>] try_to_free_pages+0x140/0x260
> 599 Jan 24 18:22:21 p34 kernel: [273475.826106] [<c013fb4f>] __alloc_pages+0x13f/0x2f0
> 600 Jan 24 18:22:21 p34 kernel: [273475.826111] [<c0350dd3>] grow_one_stripe+0x93/0x100
> 601 Jan 24 18:22:21 p34 kernel: [273475.826115] [<c0350ee6>] raid5_store_stripe_cache_size+0xa6/0xc0
> 602 Jan 24 18:22:21 p34 kernel: [273475.826120] [<c0361a83>] md_attr_store+0x73/0x90
> 603 Jan 24 18:22:21 p34 kernel: [273475.826125] [<c0192302>] sysfs_write_file+0xa2/0x100
> 604 Jan 24 18:22:21 p34 kernel: [273475.826129] [<c01595f6>] vfs_write+0xa6/0x160
> 605 Jan 24 18:22:21 p34 kernel: [273475.826134] [<c0192260>] sysfs_write_file+0x0/0x100
> 606 Jan 24 18:22:21 p34 kernel: [273475.826138] [<c0159d31>] sys_write+0x41/0x70
> 607 Jan 24 18:22:21 p34 kernel: [273475.826303] [<c0103138>] syscall_call+0x7/0xb
> 608 Jan 24 18:22:21 p34 kernel: [273475.826307] =======================
>
> Tells me what is happening.
> We try to allocate memory to increase the stripe cache (__alloc_pages)
> which requires memory to be freed, so shrink_zone gets called which
> calls into the 'xfs' filesystem, which eventually tries to write to
> the raid5 array. The raid5 array is currently 'clean' so we need to
> mark the superblock as dirty first (md_write_start), but that needs a
> lock that is being held while we grow the stripe cache. Deadlock.
>
> So the patch I posted (changing GFP_KERNEL to GFP_NOIO) will avoid
> this as it will then fail the allocation rather than initiate IO.
> However it might be better if I can find a way to avoid the
> deadlock....
>
> I'll see what I can come up with.
>
> Thanks,
> NeilBrown
>
Okay-- thanks for the explanation and I will await a future patch..
Justin.
^ permalink raw reply [flat|nested] 23+ messages in thread
* Re: change strip_cache_size freeze the whole raid
2007-01-25 0:16 ` Justin Piszcz
@ 2007-01-25 2:29 ` Neil Brown
0 siblings, 0 replies; 23+ messages in thread
From: Neil Brown @ 2007-01-25 2:29 UTC (permalink / raw)
To: Justin Piszcz; +Cc: kyle, linux-raid
On Wednesday January 24, jpiszcz@lucidpixels.com wrote:
>
> Okay-- thanks for the explanation and I will await a future patch..
>
This would be that patch. It doesn't seem to break anything, but I
haven't reproduced the bug yet (I think I need to reduce the amount of
memory I have available) so I haven't demonstrated that this fixes it.
Thanks,
NeilBrown
---------------------
Fix potential memalloc deadlock in md
If a GFP_KERNEL allocation is attempted in md while the mddev_lock is
held, it is possible for a deadlock to eventuate.
This happens if the array was marked 'clean', and the memalloc triggers
a write-out to the md device.
For the writeout to succeed, the array must be marked 'dirty', and that
requires getting the mddev_lock.
So, before attempting a GFP_KERNEL allocation while holding the lock,
make sure the array is marked 'dirty' (unless it is currently read-only).
Signed-off-by: Neil Brown <neilb@suse.de>
### Diffstat output
./drivers/md/md.c | 29 +++++++++++++++++++++++++++++
./drivers/md/raid1.c | 2 ++
./drivers/md/raid5.c | 3 +++
./include/linux/raid/md.h | 2 +-
4 files changed, 35 insertions(+), 1 deletion(-)
diff .prev/drivers/md/md.c ./drivers/md/md.c
--- .prev/drivers/md/md.c 2007-01-23 11:23:58.000000000 +1100
+++ ./drivers/md/md.c 2007-01-25 12:47:58.000000000 +1100
@@ -3564,6 +3564,8 @@ static int get_bitmap_file(mddev_t * mdd
char *ptr, *buf = NULL;
int err = -ENOMEM;
+ md_allow_write(mddev);
+
file = kmalloc(sizeof(*file), GFP_KERNEL);
if (!file)
goto out;
@@ -5032,6 +5034,33 @@ void md_write_end(mddev_t *mddev)
}
}
+/* md_allow_write(mddev)
+ * Calling this ensures that the array is marked 'active' so that writes
+ * may proceed without blocking. It is important to call this before
+ * attempting a GFP_KERNEL allocation while holding the mddev lock.
+ * Must be called with mddev_lock held.
+ */
+void md_allow_write(mddev_t *mddev)
+{
+ if (!mddev->pers)
+ return;
+ if (mddev->ro)
+ return;
+
+ spin_lock_irq(&mddev->write_lock);
+ if (mddev->in_sync) {
+ mddev->in_sync = 0;
+ set_bit(MD_CHANGE_CLEAN, &mddev->flags);
+ if (mddev->safemode_delay &&
+ mddev->safemode == 0)
+ mddev->safemode = 1;
+ spin_unlock_irq(&mddev->write_lock);
+ md_update_sb(mddev, 0);
+ } else
+ spin_unlock_irq(&mddev->write_lock);
+}
+EXPORT_SYMBOL_GPL(md_allow_write);
+
static DECLARE_WAIT_QUEUE_HEAD(resync_wait);
#define SYNC_MARKS 10
diff .prev/drivers/md/raid1.c ./drivers/md/raid1.c
--- .prev/drivers/md/raid1.c 2007-01-23 11:23:43.000000000 +1100
+++ ./drivers/md/raid1.c 2007-01-25 12:09:43.000000000 +1100
@@ -2050,6 +2050,8 @@ static int raid1_reshape(mddev_t *mddev)
return -EINVAL;
}
+ md_allow_write(mddev);
+
raid_disks = mddev->raid_disks + mddev->delta_disks;
if (raid_disks < conf->raid_disks) {
diff .prev/drivers/md/raid5.c ./drivers/md/raid5.c
--- .prev/drivers/md/raid5.c 2007-01-23 11:13:44.000000000 +1100
+++ ./drivers/md/raid5.c 2007-01-25 12:18:04.000000000 +1100
@@ -399,6 +399,8 @@ static int resize_stripes(raid5_conf_t *
if (newsize <= conf->pool_size)
return 0; /* never bother to shrink */
+ md_allow_write(conf->mddev);
+
/* Step 1 */
sc = kmem_cache_create(conf->cache_name[1-conf->active_name],
sizeof(struct stripe_head)+(newsize-1)*sizeof(struct r5dev),
@@ -3195,6 +3197,7 @@ raid5_store_stripe_cache_size(mddev_t *m
else
break;
}
+ md_allow_write(mddev);
while (new > conf->max_nr_stripes) {
if (grow_one_stripe(conf))
conf->max_nr_stripes++;
diff .prev/include/linux/raid/md.h ./include/linux/raid/md.h
--- .prev/include/linux/raid/md.h 2007-01-25 12:16:57.000000000 +1100
+++ ./include/linux/raid/md.h 2007-01-25 12:17:18.000000000 +1100
@@ -93,7 +93,7 @@ extern int sync_page_io(struct block_dev
struct page *page, int rw);
extern void md_do_sync(mddev_t *mddev);
extern void md_new_event(mddev_t *mddev);
-
+extern void md_allow_write(mddev_t *mddev);
#endif /* CONFIG_MD */
#endif
^ permalink raw reply [flat|nested] 23+ messages in thread
end of thread, other threads:[~2007-01-25 2:29 UTC | newest]
Thread overview: 23+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2007-01-22 11:02 change strip_cache_size freeze the whole raid kyle
2007-01-22 12:18 ` Justin Piszcz
2007-01-22 13:09 ` kyle
2007-01-22 13:09 ` kyle
2007-01-22 14:56 ` Justin Piszcz
2007-01-22 15:18 ` kyle
2007-01-22 15:18 ` kyle
2007-01-22 14:57 ` Steve Cousins
2007-01-22 15:01 ` Justin Piszcz
2007-01-23 14:22 ` kyle
2007-01-23 14:22 ` kyle
2007-01-22 15:10 ` Justin Piszcz
2007-01-22 15:13 ` kyle
2007-01-22 15:13 ` kyle
2007-01-22 16:10 ` Liang Yang
2007-01-22 16:10 ` Liang Yang
2007-01-22 20:23 ` Neil Brown
2007-01-22 22:47 ` Neil Brown
2007-01-23 10:57 ` Justin Piszcz
2007-01-24 23:24 ` Justin Piszcz
2007-01-25 0:13 ` Neil Brown
2007-01-25 0:16 ` Justin Piszcz
2007-01-25 2:29 ` Neil Brown