linux-kernel.vger.kernel.org archive mirror
* Fw: RCU detected CPU 1 stall (t=4295904002/751 jiffies) Pid: 902, comm: md1_raid5
@ 2009-05-20  9:46 Janos Haar
  2009-05-21  2:50 ` Paul E. McKenney
  0 siblings, 1 reply; 7+ messages in thread
From: Janos Haar @ 2009-05-20  9:46 UTC (permalink / raw)
  To: linux-kernel

Hello list,

Does somebody know what this is?
May 17 23:12:13 gladiator-afth1 kernel: RCU detected CPU 1 stall (t=4295904002/751 jiffies)
May 17 23:12:13 gladiator-afth1 kernel: Pid: 902, comm: md1_raid5 Not tainted 2.6.28.10 #1
May 17 23:12:13 gladiator-afth1 kernel: Call Trace:
May 17 23:12:13 gladiator-afth1 kernel:  <IRQ>  [<ffffffff80265395>] ? get_timestamp+0x9/0xf
May 17 23:12:13 gladiator-afth1 kernel:  [<ffffffff8026854f>] __rcu_pending+0x64/0x1e2
May 17 23:12:13 gladiator-afth1 kernel:  [<ffffffff80268703>] rcu_pending+0x36/0x6f
May 17 23:12:13 gladiator-afth1 kernel:  [<ffffffff8023f546>] update_process_times+0x37/0x5f
May 17 23:12:13 gladiator-afth1 kernel:  [<ffffffff802504a7>] tick_periodic+0x6e/0x7a
May 17 23:12:13 gladiator-afth1 kernel:  [<ffffffff802504d4>] tick_handle_periodic+0x21/0x65
May 17 23:12:13 gladiator-afth1 kernel:  [<ffffffff8021ad60>] smp_apic_timer_interrupt+0x8f/0xad
May 17 23:12:13 gladiator-afth1 kernel:  [<ffffffff8020bf1b>] apic_timer_interrupt+0x6b/0x70
May 17 23:12:13 gladiator-afth1 kernel:  <EOI>  [<ffffffff80820b8f>] ? _spin_unlock_irqrestore+0x13/0x17
May 17 23:12:13 gladiator-afth1 kernel:  [<ffffffff80728720>] ? bitmap_daemon_work+0x142/0x3b0
May 17 23:12:18 gladiator-afth1 kernel:  [<ffffffff80724c58>] ? md_check_recovery+0x1b/0x45b
May 17 23:12:18 gladiator-afth1 kernel:  [<ffffffff8071976c>] ? raid5d+0x5d/0x503
May 17 23:12:18 gladiator-afth1 kernel:  [<ffffffff80726533>] ? md_thread+0xd5/0xed
May 17 23:12:18 gladiator-afth1 kernel:  [<ffffffff80248e4a>] ? autoremove_wake_function+0x0/0x38
May 17 23:12:18 gladiator-afth1 kernel:  [<ffffffff8072645e>] ? md_thread+0x0/0xed
May 17 23:12:18 gladiator-afth1 kernel:  [<ffffffff80248d3b>] ? kthread+0x49/0x76
May 17 23:12:18 gladiator-afth1 kernel:  [<ffffffff8020c169>] ? child_rip+0xa/0x11
May 17 23:12:18 gladiator-afth1 kernel:  [<ffffffff80248cf2>] ? kthread+0x0/0x76
May 17 23:12:18 gladiator-afth1 kernel:  [<ffffffff8020c15f>] ? child_rip+0x0/0x11
Neil Brown from the RAID list suggested asking someone else... (The mail is
below.)

Thanks,
Janos Haar

----- Original Message ----- 
From: "Janos Haar" <janos.haar@netcenter.hu>
To: "Neil Brown" <neilb@suse.de>
Cc: <linux-raid@vger.kernel.org>
Sent: Tuesday, May 19, 2009 12:30 PM
Subject: Re: RCU detected CPU 1 stall (t=4295904002/751 jiffies) Pid: 902, 
comm: md1_raid5


>
> ----- Original Message ----- 
> From: "Neil Brown" <neilb@suse.de>
> To: "Janos Haar" <janos.haar@netcenter.hu>
> Cc: <linux-raid@vger.kernel.org>
> Sent: Tuesday, May 19, 2009 3:05 AM
> Subject: Re: RCU detected CPU 1 stall (t=4295904002/751 jiffies) Pid: 902, 
> comm: md1_raid5
>
>
>> On Tuesday May 19, janos.haar@netcenter.hu wrote:
>>> Hello list, Neil,
>>>
>>> Can somebody say something about this issue?
>>> I would not be surprised if it is hardware-related, but this is a
>>> brand-new server, so I am looking for a solution... :-)
>>> May 17 23:12:13 gladiator-afth1 kernel: RCU detected CPU 1 stall (t=4295904002/751 jiffies)
>>
>> I have no idea what this means.
>> I've occasionally seen this sort of message in early boot, and the
>> system then continued to work perfectly, so I figured it was an
>> early-boot glitch.  I suggest asking someone who understands RCU.
>>
>>>
>>> The entire log is here:
>>> http://download.netcenter.hu/bughunt/20090518/messages
>>>
>>> The system is on md1, and working, but slowly.
>>
>> How slowly?  Is the slowness due to disk throughput?
>
> No no, this is a fresh and idle server.
> I configured the disks and the RAID on another PC, and when that was
> finished I copied over the known-good, pre-installed software pack with
> the old 2.6.18 kernel.
> That pack is well tested, and it too reported this issue on this machine.
> (That was the first occurrence.)
> I compiled 2.6.28.10 on it, which took about 6 hours! 8-/
> But 2.6.28.10 reports this too.
>
> The slowness is not disk-based, I think: even at idle, if I move the
> selector line in mc it stops for some seconds, or I cannot type in bash
> while it happens, and another RCU message lands in the log...
> (It happens periodically, whether or not I am doing anything.)
>
> I am not sure whether it is RAID-related, but the kernel reports only the
> md1_raid5 pid, never any other.
> This is why I am asking here first.
>
> Thanks anyway. :-)
>
>> Have you tested the individual drives and compared that with the
>> array?
>
> This is brand-new hardware, with 4x 500GB Samsung drives, which report no 
> problems at all via SMART.
>
>>
>>
>>> If I leave the server alone for a day, it crashes without a saved log.
>>
>> This is a concern!  It usually points to some sort of hardware
>> problem, but it is very hard to trace.
>> Is the power supply rated high enough to support all devices?
>
> I am using a good-quality new 550W power supply, and the PC draws only
> 55-65W, measured. ;-)
> (1x Core 2 Duo, 4x HDD, nothing else of interest)
>
>> I cannot think of anything else to suggest... except to start swapping
>> components until the problem goes away...
>
> In that case, I need to start with the motherboard. 8-(
>
> Thanks a lot,
> Janos Haar
>
>>
>> NeilBrown
>



* Re: Fw: RCU detected CPU 1 stall (t=4295904002/751 jiffies) Pid: 902, comm: md1_raid5
  2009-05-20  9:46 Fw: RCU detected CPU 1 stall (t=4295904002/751 jiffies) Pid: 902, comm: md1_raid5 Janos Haar
@ 2009-05-21  2:50 ` Paul E. McKenney
  2009-05-21  4:46   ` Fw: RCU detected CPU 1 stall (t=4295904002/751 jiffies) Pid:902, " Janos Haar
  0 siblings, 1 reply; 7+ messages in thread
From: Paul E. McKenney @ 2009-05-21  2:50 UTC (permalink / raw)
  To: Janos Haar; +Cc: linux-kernel

On Wed, May 20, 2009 at 11:46:07AM +0200, Janos Haar wrote:
> Hello list,
>
> Does somebody know what this is?
> May 17 23:12:13 gladiator-afth1 kernel: RCU detected CPU 1 stall (t=4295904002/751 jiffies)
> May 17 23:12:13 gladiator-afth1 kernel: Pid: 902, comm: md1_raid5 Not tainted 2.6.28.10 #1
> May 17 23:12:13 gladiator-afth1 kernel: Call Trace:
> May 17 23:12:13 gladiator-afth1 kernel:  <IRQ>  [<ffffffff80265395>] ? get_timestamp+0x9/0xf
> May 17 23:12:13 gladiator-afth1 kernel:  [<ffffffff8026854f>] __rcu_pending+0x64/0x1e2
> May 17 23:12:13 gladiator-afth1 kernel:  [<ffffffff80268703>] rcu_pending+0x36/0x6f
> May 17 23:12:13 gladiator-afth1 kernel:  [<ffffffff8023f546>] update_process_times+0x37/0x5f
> May 17 23:12:13 gladiator-afth1 kernel:  [<ffffffff802504a7>] tick_periodic+0x6e/0x7a
> May 17 23:12:13 gladiator-afth1 kernel:  [<ffffffff802504d4>] tick_handle_periodic+0x21/0x65
> May 17 23:12:13 gladiator-afth1 kernel:  [<ffffffff8021ad60>] smp_apic_timer_interrupt+0x8f/0xad
> May 17 23:12:13 gladiator-afth1 kernel:  [<ffffffff8020bf1b>] apic_timer_interrupt+0x6b/0x70
> May 17 23:12:13 gladiator-afth1 kernel:  <EOI>  [<ffffffff80820b8f>] ? _spin_unlock_irqrestore+0x13/0x17

One of the following functions is looping in the kernel.  If you are
running with HZ=250, it has been looping for about three seconds.
Interrupts are enabled, but preemption must be disabled (perhaps due
to !CONFIG_PREEMPT).
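(For the arithmetic: the detector reports 751 jiffies, and at HZ=250 that
is 751 / 250 jiffies-per-second, roughly 3.0 seconds.)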

							Thanx, Paul

> May 17 23:12:13 gladiator-afth1 kernel:  [<ffffffff80728720>] ? bitmap_daemon_work+0x142/0x3b0
> May 17 23:12:18 gladiator-afth1 kernel:  [<ffffffff80724c58>] ? md_check_recovery+0x1b/0x45b
> May 17 23:12:18 gladiator-afth1 kernel:  [<ffffffff8071976c>] ? raid5d+0x5d/0x503
> May 17 23:12:18 gladiator-afth1 kernel:  [<ffffffff80726533>] ? md_thread+0xd5/0xed
> May 17 23:12:18 gladiator-afth1 kernel:  [<ffffffff80248e4a>] ? autoremove_wake_function+0x0/0x38
> May 17 23:12:18 gladiator-afth1 kernel:  [<ffffffff8072645e>] ? md_thread+0x0/0xed
> May 17 23:12:18 gladiator-afth1 kernel:  [<ffffffff80248d3b>] ? kthread+0x49/0x76
> May 17 23:12:18 gladiator-afth1 kernel:  [<ffffffff8020c169>] ? child_rip+0xa/0x11
> May 17 23:12:18 gladiator-afth1 kernel:  [<ffffffff80248cf2>] ? kthread+0x0/0x76
> May 17 23:12:18 gladiator-afth1 kernel:  [<ffffffff8020c15f>] ? child_rip+0x0/0x11
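
(A note on reading the trace: frames marked "?" are addresses the unwinder
found on the stack that may or may not belong to the live call chain, so
the md path above - md_thread -> raid5d -> md_check_recovery /
bitmap_daemon_work - is a plausible reconstruction rather than a certainty.)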

* Re: Fw: RCU detected CPU 1 stall (t=4295904002/751 jiffies) Pid:902, comm: md1_raid5
  2009-05-21  2:50 ` Paul E. McKenney
@ 2009-05-21  4:46   ` Janos Haar
  2009-05-21  5:16     ` Paul E. McKenney
  0 siblings, 1 reply; 7+ messages in thread
From: Janos Haar @ 2009-05-21  4:46 UTC (permalink / raw)
  To: paulmck; +Cc: linux-kernel

Paul,

Thank you for your attention.
Yes, the PC takes 2-3 second "pauses" and drops this message again and again.
If I remove the RCU debugging, the message disappears, but the pauses are
still there, and they put a load of 2-3 on the idle system.
Can I do something?
Do you suggest using PREEMPT? (This is a server.)

Thank you,
Janos Haar


* Re: Fw: RCU detected CPU 1 stall (t=4295904002/751 jiffies) Pid:902, comm: md1_raid5
  2009-05-21  4:46   ` Fw: RCU detected CPU 1 stall (t=4295904002/751 jiffies) Pid:902, " Janos Haar
@ 2009-05-21  5:16     ` Paul E. McKenney
  2009-05-21  6:50       ` Neil Brown
  0 siblings, 1 reply; 7+ messages in thread
From: Paul E. McKenney @ 2009-05-21  5:16 UTC (permalink / raw)
  To: Janos Haar; +Cc: linux-kernel, neilb

On Thu, May 21, 2009 at 06:46:15AM +0200, Janos Haar wrote:
> Paul,
>
> Thank you for your attention.
> Yes, the PC takes 2-3 second "pauses" and drops this message again and
> again.
> If I remove the RCU debugging, the message disappears, but the pauses are
> still there, and they put a load of 2-3 on the idle system.
> Can I do something?
> Do you suggest using PREEMPT? (This is a server.)

One possibility is that the lock that bitmap_daemon_work() acquires is
being held for too long.  Another possibility is the list traversal in
md_check_recovery() that might loop for a long time if the list were
excessively long or could be temporarily tied in a knot.
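
For readers following along in the source: the 2.6.28 loop takes and drops
bitmap->lock once per chunk, and bitmap->chunks can be large.  A minimal
userspace sketch of that loop shape (hypothetical names; a pthread mutex
standing in for the kernel spinlock):

	#include <pthread.h>
	#include <stdio.h>

	#define CHUNKS 100000	/* bitmap->chunks can be this large */

	static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

	int main(void)
	{
		/* Lock and unlock once per chunk.  Others can slip in
		 * between iterations, but this thread never sleeps or
		 * yields; under !CONFIG_PREEMPT a kernel loop of this
		 * shape is enough to trip the RCU stall detector. */
		for (long j = 0; j < CHUNKS; j++) {
			pthread_mutex_lock(&lock);
			/* ... inspect/age the counter for chunk j ... */
			pthread_mutex_unlock(&lock);
		}
		printf("walked %d chunks without yielding\n", CHUNKS);
		return 0;
	}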

Neil, thoughts?

							Thanx, Paul


* Re: Fw: RCU detected CPU 1 stall (t=4295904002/751 jiffies) Pid:902, comm: md1_raid5
  2009-05-21  5:16     ` Paul E. McKenney
@ 2009-05-21  6:50       ` Neil Brown
  2009-05-21  9:50         ` Fw: RCU detected CPU 1 stall (t=4295904002/751 jiffies)Pid:902, " Janos Haar
  2009-05-21 21:53         ` Janos Haar
  0 siblings, 2 replies; 7+ messages in thread
From: Neil Brown @ 2009-05-21  6:50 UTC (permalink / raw)
  To: paulmck; +Cc: Janos Haar, linux-kernel

On Wednesday May 20, paulmck@linux.vnet.ibm.com wrote:
> On Thu, May 21, 2009 at 06:46:15AM +0200, Janos Haar wrote:
> > Paul,
> >
> > Thank you for your attention.
> > Yes, the PC takes 2-3 second "pauses" and drops this message again and
> > again.
> > If I remove the RCU debugging, the message disappears, but the pauses are
> > still there, and they put a load of 2-3 on the idle system.
> > Can I do something?
> > Do you suggest using PREEMPT? (This is a server.)
> 
> One possibility is that the lock that bitmap_daemon_work() acquires is
> being held for too long.  Another possibility is the list traversal in
> md_check_recovery() that might loop for a long time if the list were
> excessively long or could be temporarily tied in a knot.
> 
> Neil, thoughts?
> 

I would be surprised if any of these things take as long as 3 seconds
(or even 1 second) but I cannot completely rule it out.

I assume that you mean 3 seconds of continuous running with no
sleeping, so it cannot be a slow kmalloc that is causing the delay?

bitmap_daemon_work is the most likely candidate as bitmap->chunks
can be very large (thousands, probably not millions though).
Taking and dropping the lock every time around that loop doesn't
really make much sense, does it....
And it looks like it could actually be optimised quite a bit to skip a
lot of the iterations in most cases - there are two places where we
can accelerate 'j' quite a lot.
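
The acceleration relies on chunk indices mapping onto pages in power-of-two
groups: OR-ing j with (chunks-per-page - 1) moves it to the last chunk of
the current page, so the loop's j++ lands on the first chunk of the next
page.  A standalone sketch of the idiom (CHUNKS_PER_PAGE is a stand-in
constant, not the kernel's macro):

	#include <stdio.h>

	#define CHUNKS_PER_PAGE 4096	/* assume a power of two */

	int main(void)
	{
		unsigned long chunks = 3 * CHUNKS_PER_PAGE;
		unsigned long j;

		for (j = 0; j < chunks; j++) {
			/* ... decide the rest of this page needs no work ... */
			j |= CHUNKS_PER_PAGE - 1;	/* jump to page end */
			printf("last chunk visited on this page: %lu\n", j);
		}
		return 0;
	}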

Janos: Can you try this and see if it makes a difference?
Thanks.

NeilBrown

diff --git a/drivers/md/bitmap.c b/drivers/md/bitmap.c
index 47c68bc..56df1ce 100644
--- a/drivers/md/bitmap.c
+++ b/drivers/md/bitmap.c
@@ -1097,14 +1097,12 @@ void bitmap_daemon_work(struct bitmap *bitmap)
 	}
 	bitmap->allclean = 1;
 
+	spin_lock_irqsave(&bitmap->lock, flags);
 	for (j = 0; j < bitmap->chunks; j++) {
 		bitmap_counter_t *bmc;
-		spin_lock_irqsave(&bitmap->lock, flags);
-		if (!bitmap->filemap) {
+		if (!bitmap->filemap)
 			/* error or shutdown */
-			spin_unlock_irqrestore(&bitmap->lock, flags);
 			break;
-		}
 
 		page = filemap_get_page(bitmap, j);
 
@@ -1121,6 +1119,8 @@ void bitmap_daemon_work(struct bitmap *bitmap)
 					write_page(bitmap, page, 0);
 					bitmap->allclean = 0;
 				}
+				spin_lock_irqsave(&bitmap->lock, flags);
+				j |= (PAGE_BITS - 1);
 				continue;
 			}
 
@@ -1181,9 +1181,10 @@ void bitmap_daemon_work(struct bitmap *bitmap)
 					ext2_clear_bit(file_page_offset(j), paddr);
 				kunmap_atomic(paddr, KM_USER0);
 			}
-		}
-		spin_unlock_irqrestore(&bitmap->lock, flags);
+		} else
+			j |= PAGE_COUNTER_MASK;
 	}
+	spin_unlock_irqrestore(&bitmap->lock, flags);
 
 	/* now sync the final page */
 	if (lastpage != NULL) {
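
(A note on the two different skip strides above: if I read 2.6.28's
bitmap.h correctly, a page of the on-disk bitmap file carries one bit per
chunk, i.e. PAGE_BITS chunks, while a page of in-memory counters carries
one 16-bit bitmap_counter_t per chunk, hence the smaller PAGE_COUNTER_MASK
stride in the second hunk.)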


* Re: Fw: RCU detected CPU 1 stall (t=4295904002/751 jiffies)Pid:902, comm: md1_raid5
  2009-05-21  6:50       ` Neil Brown
@ 2009-05-21  9:50         ` Janos Haar
  2009-05-21 21:53         ` Janos Haar
  1 sibling, 0 replies; 7+ messages in thread
From: Janos Haar @ 2009-05-21  9:50 UTC (permalink / raw)
  To: Neil Brown; +Cc: paulmck, linux-kernel

Hello,

Just a brief preliminary note:
I am still compiling my kernel (without the patch) with preemption enabled;
the load is 2.00.
I have removed the bitmap from my array, and the load fell to 1.00; the
delays are gone.
(The currently running kernel does not have RCU debugging enabled, so there
is no error message.)

I need to run now; I will try the patch later...
Thanks a lot,

Janos Haar

[root@gladiator-afth1 ~]# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] [faulty]
md1 : active raid6 sdd2[3] sdc2[1] sdb2[0] sda2[2]
      19550976 blocks level 6, 16k chunk, algorithm 2 [4/4] [UUUU]

md0 : active raid1 sdd1[3] sdc1[1] sdb1[0] sda1[2]
      104320 blocks [4/4] [UUUU]

unused devices: <none>
[root@gladiator-afth1 ~]#

The internal bitmap was in md1.
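
(For the record: removing an internal bitmap from a live array is typically
done with "mdadm --grow /dev/md1 --bitmap=none", and it can be re-added
later with "--bitmap=internal".)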


* Re: Fw: RCU detected CPU 1 stall (t=4295904002/751 jiffies)Pid:902, comm: md1_raid5
  2009-05-21  6:50       ` Neil Brown
  2009-05-21  9:50         ` Fw: RCU detected CPU 1 stall (t=4295904002/751 jiffies)Pid:902, " Janos Haar
@ 2009-05-21 21:53         ` Janos Haar
  1 sibling, 0 replies; 7+ messages in thread
From: Janos Haar @ 2009-05-21 21:53 UTC (permalink / raw)
  To: Neil Brown; +Cc: paulmck, linux-kernel

Neil, Paul,

The problem is solved.
It was a BIOS bug.
(The Fedora install CD showed the same behaviour, and with the latest BIOS
version the delays are gone. 8-)

Thanks to both of you for all the help!

Janos Haar


end of thread, other threads:[~2009-05-21 21:58 UTC | newest]

Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2009-05-20  9:46 Fw: RCU detected CPU 1 stall (t=4295904002/751 jiffies) Pid: 902, comm: md1_raid5 Janos Haar
2009-05-21  2:50 ` Paul E. McKenney
2009-05-21  4:46   ` Fw: RCU detected CPU 1 stall (t=4295904002/751 jiffies) Pid:902, " Janos Haar
2009-05-21  5:16     ` Paul E. McKenney
2009-05-21  6:50       ` Neil Brown
2009-05-21  9:50         ` Fw: RCU detected CPU 1 stall (t=4295904002/751 jiffies)Pid:902, " Janos Haar
2009-05-21 21:53         ` Janos Haar
