* BUG: soft lockup detected on CPU#1!
@ 2006-07-17 12:52 Jochen Heuer
  2006-07-17 14:30 ` Steven Rostedt
  0 siblings, 1 reply; 23+ messages in thread
From: Jochen Heuer @ 2006-07-17 12:52 UTC (permalink / raw)
  To: linux-kernel

Hi,

I have been running 2.6.17 on my desktop system (Asus A8V + Athlon64 X2 3800)
and I am having severe problems with lockups. These only show up when surfing
the net; during compiling or mprime runs there is absolutely no problem.

At first I thought this was related to the S-ATA driver since I got error
messages like these on the console once before it locked up hard (no sysrq!):

ata1: command 0xca timeout, stat 0x50 host_stat 0x4
ata1: status=0x50 { DriveReady SeekComplete }
ata1: command 0xea timeout, stat 0x50 host_stat 0x0
ata1: status=0x50 { DriveReady SeekComplete }

But switching to an IDE drive did not fix the lockups. So I switched to
2.6.18-rc2 and today I got the following reported via dmesg:

Jul 17 09:23:03 [kernel] BUG: soft lockup detected on CPU#1!
Jul 17 09:23:03 [kernel]  [<c0103cd2>] show_trace+0x12/0x20
Jul 17 09:23:03 [kernel]  [<c0103de9>] dump_stack+0x19/0x20
Jul 17 09:23:03 [kernel]  [<c0143e77>] softlockup_tick+0xa7/0xd0
Jul 17 09:23:03 [kernel]  [<c0129422>] run_local_timers+0x12/0x20
Jul 17 09:23:03 [kernel]  [<c012923e>] update_process_times+0x6e/0xa0
Jul 17 09:23:03 [kernel]  [<c011127d>] smp_apic_timer_interrupt+0x6d/0x80
Jul 17 09:23:03 [kernel]  [<c0103942>] apic_timer_interrupt+0x2a/0x30
Jul 17 09:23:03 [kernel]  [<c022df93>] cbc_process_decrypt+0x93/0xf0
Jul 17 09:23:03 [kernel]  [<c022dcbe>] crypt+0xee/0x1e0
Jul 17 09:23:03 [kernel]  [<c022ddef>] crypt_iv_unaligned+0x3f/0xc0
Jul 17 09:23:03 [kernel]  [<c022e23d>] cbc_decrypt_iv+0x3d/0x50
Jul 17 09:23:03 [kernel]  [<c032f6b7>] crypt_convert_scatterlist+0x117/0x170
Jul 17 09:23:03 [kernel]  [<c032f8b2>] crypt_convert+0x142/0x190
Jul 17 09:23:03 [kernel]  [<c032fb82>] kcryptd_do_work+0x42/0x60
Jul 17 09:23:03 [kernel]  [<c012fcff>] run_workqueue+0x6f/0xe0
Jul 17 09:23:03 [kernel]  [<c012fe98>] worker_thread+0x128/0x150
Jul 17 09:23:03 [kernel]  [<c0133364>] kthread+0xa4/0xe0
Jul 17 09:23:03 [kernel]  [<c01010e5>] kernel_thread_helper+0x5/0x10
Jul 17 09:24:17 [kernel] =============================================
Jul 17 09:24:17 [kernel] [ INFO: possible recursive locking detected ]
Jul 17 09:24:17 [kernel] ---------------------------------------------
Jul 17 09:24:17 [kernel] mv/12680 is trying to acquire lock:
Jul 17 09:24:17 [kernel]  (&(&ip->i_lock)->mr_lock){----}, at: [<c01f63b0>]
xfs_ilock+0x60/0xb0
Jul 17 09:24:17 [kernel] but task is already holding lock:
Jul 17 09:24:17 [kernel]  (&(&ip->i_lock)->mr_lock){----}, at: [<c01f63b0>]
xfs_ilock+0x60/0xb0
Jul 17 09:24:17 [kernel] other info that might help us debug this:
Jul 17 09:24:17 [kernel] 4 locks held by mv/12680:
Jul 17 09:24:17 [kernel]  #0:  (&s->s_vfs_rename_mutex){--..}, at: [<c03c2931>]
mutex_lock+0x21/0x30
Jul 17 09:24:17 [kernel]  #1:  (&inode->i_mutex/1){--..}, at: [<c017506b>]
lock_rename+0xbb/0xd0
Jul 17 09:24:17 [kernel]  #2:  (&inode->i_mutex/2){--..}, at: [<c0175052>]
lock_rename+0xa2/0xd0
Jul 17 09:24:17 [kernel]  #3:  (&(&ip->i_lock)->mr_lock){----}, at:
[<c01f63b0>] xfs_ilock+0x60/0xb0
Jul 17 09:24:17 [kernel] stack backtrace:
Jul 17 09:24:17 [kernel]  [<c0103cd2>] show_trace+0x12/0x20
Jul 17 09:24:17 [kernel]  [<c0103de9>] dump_stack+0x19/0x20
Jul 17 09:24:17 [kernel]  [<c01385a9>] print_deadlock_bug+0xb9/0xd0
Jul 17 09:24:17 [kernel]  [<c013862b>] check_deadlock+0x6b/0x80
Jul 17 09:24:17 [kernel]  [<c0139ed4>] __lock_acquire+0x354/0x990
Jul 17 09:24:17 [kernel]  [<c013ac35>] lock_acquire+0x75/0xa0
Jul 17 09:24:17 [kernel]  [<c0136aaf>] down_write+0x3f/0x60
Jul 17 09:24:17 [kernel]  [<c01f63b0>] xfs_ilock+0x60/0xb0
Jul 17 09:24:17 [kernel]  [<c0217981>] xfs_lock_inodes+0xb1/0x120
Jul 17 09:24:17 [kernel]  [<c020ca7b>] xfs_rename+0x20b/0x8e0
Jul 17 09:24:17 [kernel]  [<c022351a>] xfs_vn_rename+0x3a/0x90
Jul 17 09:24:17 [kernel]  [<c017687d>] vfs_rename_dir+0xbd/0xd0
Jul 17 09:24:17 [kernel]  [<c0176a4c>] vfs_rename+0xdc/0x230
Jul 17 09:24:17 [kernel]  [<c0176d02>] do_rename+0x162/0x190
Jul 17 09:24:17 [kernel]  [<c0176d9c>] sys_renameat+0x6c/0x80
Jul 17 09:24:17 [kernel]  [<c0176dd8>] sys_rename+0x28/0x30
Jul 17 09:24:17 [kernel]  [<c0102e15>] sysenter_past_esp+0x56/0x8d

I am not sure if this information is enough to isolate the problem. If you need
any further details, just let me know.

Best regards,

   Jogi

* BUG: soft lockup detected on CPU#1!
@ 2007-05-02 16:17 brendan powers
  0 siblings, 0 replies; 23+ messages in thread
From: brendan powers @ 2007-05-02 16:17 UTC (permalink / raw)
  To: linux-kernel

Hello, I'm running Debian Sarge (3.1) with kernel 2.6.16.7 and came
across this kernel oops. The machine locked up shortly afterwards. It's a
terminal server, so there are a lot of different things going on and I'm
not sure exactly what caused this to happen. Does anyone have any ideas?

Here is the log of what happened.

CIFS VFS: Error 0xfffffff3 on cifs_get_inode_info in lookup of \.directory
smbfs: Unrecognized mount option domain
BUG: soft lockup detected on CPU#1!

Pid: 8657, comm:             kio_file
EIP: 0060:[get_offset_pmtmr+22/3661] CPU: 1
EIP is at get_offset_pmtmr+0x16/0xe4d
 EFLAGS: 00000246    Not tainted  (2.6.16.7.resara-opteron #1)
EAX: 00906422 EBX: d4d49de8 ECX: 00906416 EDX: 00001008
ESI: 00905639 EDI: 0090641c EBP: 0000000a DS: 007b ES: 007b
CR0: 8005003b CR2: 091d9000 CR3: 36e1dac0 CR4: 000006f0
 [do_gettimeofday+28/164] do_gettimeofday+0x1c/0xa4
 [getnstimeofday+15/39] getnstimeofday+0xf/0x27
 [ktime_get_ts+24/81] ktime_get_ts+0x18/0x51
 [ktime_get+16/58] ktime_get+0x10/0x3a
 [hrtimer_run_queues+45/225] hrtimer_run_queues+0x2d/0xe1
 [run_timer_softirq+34/387] run_timer_softirq+0x22/0x183
 [__do_softirq+91/196] __do_softirq+0x5b/0xc4
 [do_softirq+45/49] do_softirq+0x2d/0x31
 [apic_timer_interrupt+28/36] apic_timer_interrupt+0x1c/0x24
 [generic_fillattr+117/157] generic_fillattr+0x75/0x9d
 [pg0+948724075/1069974528] cifs_getattr+0x1f/0x26 [cifs]
 [vfs_getattr+65/150] vfs_getattr+0x41/0x96
 [vfs_stat_fd+50/69] vfs_stat_fd+0x32/0x45
 [current_fs_time+72/95] current_fs_time+0x48/0x5f
 [dput+27/281] dput+0x1b/0x119
 [mntput_no_expire+20/113] mntput_no_expire+0x14/0x71
 [vfs_stat+15/19] vfs_stat+0xf/0x13
 [sys_stat64+16/39] sys_stat64+0x10/0x27
 [sys_readlink+19/23] sys_readlink+0x13/0x17
 [syscall_call+7/11] syscall_call+0x7/0xb
BUG: soft lockup detected on CPU#0!

Pid: 8694, comm:             kio_file
EIP: 0060:[generic_fillattr+114/157] CPU: 0
EIP is at generic_fillattr+0x72/0x9d
 EFLAGS: 00000202    Not tainted  (2.6.16.7.resara-opteron #1)
EAX: 0000002d EBX: defc7f64 ECX: c481fb04 EDX: 00000001
ESI: 00000000 EDI: 00000000 EBP: e8afc740 DS: 007b ES: 007b
CR0: 8005003b CR2: 091d6a8c CR3: 1a457380 CR4: 000006f0
 [pg0+948724075/1069974528] cifs_getattr+0x1f/0x26 [cifs]
 [vfs_getattr+65/150] vfs_getattr+0x41/0x96
 [vfs_stat_fd+50/69] vfs_stat_fd+0x32/0x45
 [__mark_inode_dirty+38/339] __mark_inode_dirty+0x26/0x153
 [dput+27/281] dput+0x1b/0x119
 [mntput_no_expire+20/113] mntput_no_expire+0x14/0x71
 [vfs_stat+15/19] vfs_stat+0xf/0x13
 [sys_stat64+16/39] sys_stat64+0x10/0x27
 [sys_readlink+19/23] sys_readlink+0x13/0x17
 [syscall_call+7/11] syscall_call+0x7/0xb
 CIFS VFS: Send error in read = -13
May  2 09:44:09 localhost last message repeated 9 times
BUG: soft lockup detected on CPU#3!

Pid: 8684, comm:            konqueror
EIP: 0060:[generic_fillattr+117/157] CPU: 3
EIP is at generic_fillattr+0x75/0x9d
 EFLAGS: 00000202    Not tainted  (2.6.16.7.resara-opteron #1)
EAX: 0000002f EBX: d17d5f64 ECX: c481fb04 EDX: 00000001
ESI: 00000000 EDI: 00000000 EBP: e8afc740 DS: 007b ES: 007b
CR0: 8005003b CR2: ab5f7000 CR3: 1a457840 CR4: 000006f0
 [pg0+948724075/1069974528] cifs_getattr+0x1f/0x26 [cifs]
 [vfs_getattr+65/150] vfs_getattr+0x41/0x96
 [vfs_stat_fd+50/69] vfs_stat_fd+0x32/0x45
 [vfs_stat+15/19] vfs_stat+0xf/0x13
 [sys_stat64+16/39] sys_stat64+0x10/0x27
 [slab_destroy+56/91] slab_destroy+0x38/0x5b
 [syscall_call+7/11] syscall_call+0x7/0xb
 CIFS VFS: Send error in read = -13

* BUG: soft lockup detected on CPU#1!
@ 2009-02-11  7:16 raksac
  2009-02-11  9:21 ` Justin Piszcz
  2009-02-12 21:49 ` Dave Chinner
  0 siblings, 2 replies; 23+ messages in thread
From: raksac @ 2009-02-11  7:16 UTC (permalink / raw)
  To: xfs


Hello,

I am running the 2.6.28-based XFS kernel driver on a
custom kernel with the following config options enabled:

CONFIG_PREEMPT
CONFIG_DETECT_SOFTLOCKUP

Running the following xfsqa test causes a soft lockup. The
configuration is an x86 box with Hyper-Threading, 4 GB RAM
and an AHCI-connected JBOD. It is 100% reproducible.

Any suggestions/inputs on where to start debugging the
problem would be much appreciated.

#! /bin/sh
# FS QA Test No. 008
#
# randholes test
#
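
For anyone trying to reproduce this, here is a rough sketch of how test 008
is typically invoked under the xfstests harness; the device paths below are
only placeholders for this JBOD setup, not the exact configuration used here:

# reproduction sketch (placeholder devices, stock xfstests checkout)
export TEST_DEV=/dev/sdb1        # XFS filesystem the harness runs tests on
export TEST_DIR=/mnt/test        # its mount point
export SCRATCH_DEV=/dev/sdc1     # scratch device the harness may re-mkfs
export SCRATCH_MNT=/mnt/scratch  # scratch mount point
cd xfstests
./check 008                      # runs the randholes test quoted above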

BUG: soft lockup detected on CPU#1!
 [<4013d525>] softlockup_tick+0x9c/0xaf
 [<40123246>] update_process_times+0x3d/0x60
 [<401100ab>] smp_apic_timer_interrupt+0x52/0x58
 [<40103633>] apic_timer_interrupt+0x1f/0x24
 [<402a1557>] _spin_lock_irqsave+0x48/0x61
 [<f8b8fe30>] xfs_iflush_cluster+0x16d/0x31c [xfs]
 [<f8b9018b>] xfs_iflush+0x1ac/0x271 [xfs]
 [<f8ba49a1>] xfs_inode_flush+0xd6/0xfa [xfs]
 [<f8bb13c8>] xfs_fs_write_inode+0x27/0x40 [xfs]
 [<401789d9>] __writeback_single_inode+0x1b0/0x2ff
 [<40101ad5>] __switch_to+0x23/0x1f9
 [<40178f87>] sync_sb_inodes+0x196/0x261
 [<4017920a>] writeback_inodes+0x67/0xb1
 [<401465df>] wb_kupdate+0x7b/0xe0
 [<40146bc3>] pdflush+0x0/0x1b5
 [<40146ce1>] pdflush+0x11e/0x1b5
 [<40146564>] wb_kupdate+0x0/0xe0
 [<4012be6d>] kthread+0xc1/0xec
 [<4012bdac>] kthread+0x0/0xec
 [<401038b3>] kernel_thread_helper+0x7/0x10
 =======================

Thanks,
Rakesh

* BUG: soft lockup detected on CPU#1!
@ 2009-02-11  7:57 Rakesh
  0 siblings, 0 replies; 23+ messages in thread
From: Rakesh @ 2009-02-11  7:57 UTC (permalink / raw)
  To: xfs


Hello,

I am running the 2.6.28-based XFS kernel driver on a
custom kernel with the following config options enabled:

CONFIG_PREEMPT
CONFIG_DETECT_SOFTLOCKUP

Running the following xfsqa test causes a soft lockup. The
configuration is an x86 box with Hyper-Threading, 4 GB RAM
and an AHCI-connected JBOD. It is 100% reproducible.

Any suggestions/inputs on where to start debugging the
problem would be much appreciated.

#! /bin/sh
# FS QA Test No. 008
#
# randholes test
#

BUG: soft lockup detected on CPU#1!
 [<4013d525>] softlockup_tick+0x9c/0xaf
 [<40123246>] update_process_times+0x3d/0x60
 [<401100ab>] smp_apic_timer_interrupt+0x52/0x58
 [<40103633>] apic_timer_interrupt+0x1f/0x24
 [<402a1557>] _spin_lock_irqsave+0x48/0x61
 [<f8b8fe30>] xfs_iflush_cluster+0x16d/0x31c [xfs]
 [<f8b9018b>] xfs_iflush+0x1ac/0x271 [xfs]
 [<f8ba49a1>] xfs_inode_flush+0xd6/0xfa [xfs]
 [<f8bb13c8>] xfs_fs_write_inode+0x27/0x40 [xfs]
 [<401789d9>] __writeback_single_inode+0x1b0/0x2ff
 [<40101ad5>] __switch_to+0x23/0x1f9
 [<40178f87>] sync_sb_inodes+0x196/0x261
 [<4017920a>] writeback_inodes+0x67/0xb1
 [<401465df>] wb_kupdate+0x7b/0xe0
 [<40146bc3>] pdflush+0x0/0x1b5
 [<40146ce1>] pdflush+0x11e/0x1b5
 [<40146564>] wb_kupdate+0x0/0xe0
 [<4012be6d>] kthread+0xc1/0xec
 [<4012bdac>] kthread+0x0/0xec
 [<401038b3>] kernel_thread_helper+0x7/0x10
 =======================

Thanks,
Rakesh



Thread overview: 23+ messages
2006-07-17 12:52 BUG: soft lockup detected on CPU#1! Jochen Heuer
2006-07-17 14:30 ` Steven Rostedt
2006-07-17 14:48   ` Jochen Heuer
2006-07-21 22:53     ` Jochen Heuer
2006-07-24 13:20       ` Steven Rostedt
2006-07-21 22:58   ` Jochen Heuer
2007-05-02 16:17 brendan powers
2009-02-11  7:16 raksac
2009-02-11  9:21 ` Justin Piszcz
2009-02-11 23:33   ` raksac
2009-02-11 23:36     ` Justin Piszcz
2009-02-12  9:22       ` raksac
2009-02-12 21:55         ` Dave Chinner
2009-02-12 21:59           ` raksac
2009-02-12 22:10             ` Eric Sandeen
2009-02-12 22:16               ` raksac
2009-02-13  4:56                 ` Eric Sandeen
2009-02-19  8:04                   ` raksac
2009-02-13  9:32                 ` Michael Monnerie
2009-02-11 23:34   ` raksac
2009-02-12 21:49 ` Dave Chinner
2009-02-12 21:55   ` raksac
2009-02-11  7:57 Rakesh
