* Hang in XFS reclaim on 3.7.0-rc3
@ 2012-10-29 20:03 ` Torsten Kaiser
  0 siblings, 0 replies; 31+ messages in thread
From: Torsten Kaiser @ 2012-10-29 20:03 UTC (permalink / raw)
  To: xfs; +Cc: Linux Kernel

After experiencing a hang of all IO yesterday (
http://marc.info/?l=linux-kernel&m=135142236520624&w=2 ), I turned on
LOCKDEP after upgrading to -rc3.

I then tried to replicate the load that hung yesterday and got the
following lockdep report, implicating XFS rather than the stacking of
swap onto dm-crypt and md.

Oct 29 20:27:11 thoregon kernel: [ 2675.571958] usb 7-2: USB disconnect, device number 2
Oct 29 20:30:01 thoregon kernel: [ 2844.971913]
Oct 29 20:30:01 thoregon kernel: [ 2844.971920] =================================
Oct 29 20:30:01 thoregon kernel: [ 2844.971921] [ INFO: inconsistent lock state ]
Oct 29 20:30:01 thoregon kernel: [ 2844.971924] 3.7.0-rc3 #1 Not tainted
Oct 29 20:30:01 thoregon kernel: [ 2844.971925] ---------------------------------
Oct 29 20:30:01 thoregon kernel: [ 2844.971927] inconsistent {RECLAIM_FS-ON-W} -> {IN-RECLAIM_FS-W} usage.
Oct 29 20:30:01 thoregon kernel: [ 2844.971929] kswapd0/725 [HC0[0]:SC0[0]:HE1:SE1] takes:
Oct 29 20:30:01 thoregon kernel: [ 2844.971931] (&(&ip->i_lock)->mr_lock){++++?.}, at: [<ffffffff811e7ef4>] xfs_ilock+0x84/0xb0
Oct 29 20:30:01 thoregon kernel: [ 2844.971941] {RECLAIM_FS-ON-W} state was registered at:
Oct 29 20:30:01 thoregon kernel: [ 2844.971942]   [<ffffffff8108137e>] mark_held_locks+0x7e/0x130
Oct 29 20:30:01 thoregon kernel: [ 2844.971947]   [<ffffffff81081a63>] lockdep_trace_alloc+0x63/0xc0
Oct 29 20:30:01 thoregon kernel: [ 2844.971949]   [<ffffffff810e9dd5>] kmem_cache_alloc+0x35/0xe0
Oct 29 20:30:01 thoregon kernel: [ 2844.971952]   [<ffffffff810dba31>] vm_map_ram+0x271/0x770
Oct 29 20:30:01 thoregon kernel: [ 2844.971955]   [<ffffffff811e10a6>] _xfs_buf_map_pages+0x46/0xe0
Oct 29 20:30:01 thoregon kernel: [ 2844.971959]   [<ffffffff811e1fba>] xfs_buf_get_map+0x8a/0x130
Oct 29 20:30:01 thoregon kernel: [ 2844.971961]   [<ffffffff81233849>] xfs_trans_get_buf_map+0xa9/0xd0
Oct 29 20:30:01 thoregon kernel: [ 2844.971964]   [<ffffffff8121e339>] xfs_ifree_cluster+0x129/0x670
Oct 29 20:30:01 thoregon kernel: [ 2844.971967]   [<ffffffff8121f959>] xfs_ifree+0xe9/0xf0
Oct 29 20:30:01 thoregon kernel: [ 2844.971969]   [<ffffffff811f4abf>] xfs_inactive+0x2af/0x480
Oct 29 20:30:01 thoregon kernel: [ 2844.971972]   [<ffffffff811efb90>] xfs_fs_evict_inode+0x70/0x80
Oct 29 20:30:01 thoregon kernel: [ 2844.971974]   [<ffffffff8110cb8f>] evict+0xaf/0x1b0
Oct 29 20:30:01 thoregon kernel: [ 2844.971977]   [<ffffffff8110cd95>] iput+0x105/0x210
Oct 29 20:30:01 thoregon kernel: [ 2844.971979]   [<ffffffff811070d0>] dentry_iput+0xa0/0xe0
Oct 29 20:30:01 thoregon kernel: [ 2844.971981]   [<ffffffff81108310>] dput+0x150/0x280
Oct 29 20:30:01 thoregon kernel: [ 2844.971983]   [<ffffffff811020fb>] sys_renameat+0x21b/0x290
Oct 29 20:30:01 thoregon kernel: [ 2844.971986]   [<ffffffff81102186>] sys_rename+0x16/0x20
Oct 29 20:30:01 thoregon kernel: [ 2844.971988]   [<ffffffff816b2292>] system_call_fastpath+0x16/0x1b
Oct 29 20:30:01 thoregon kernel: [ 2844.971992] irq event stamp: 155377
Oct 29 20:30:01 thoregon kernel: [ 2844.971993] hardirqs last  enabled at (155377): [<ffffffff816ae1ed>] mutex_trylock+0xfd/0x170
Oct 29 20:30:01 thoregon kernel: [ 2844.971997] hardirqs last disabled at (155376): [<ffffffff816ae12e>] mutex_trylock+0x3e/0x170
Oct 29 20:30:01 thoregon kernel: [ 2844.971999] softirqs last  enabled at (155368): [<ffffffff81042fb1>] __do_softirq+0x111/0x170
Oct 29 20:30:01 thoregon kernel: [ 2844.972002] softirqs last disabled at (155353): [<ffffffff816b33bc>] call_softirq+0x1c/0x30
Oct 29 20:30:01 thoregon kernel: [ 2844.972004]
Oct 29 20:30:01 thoregon kernel: [ 2844.972004] other info that might help us debug this:
Oct 29 20:30:01 thoregon kernel: [ 2844.972006]  Possible unsafe locking scenario:
Oct 29 20:30:01 thoregon kernel: [ 2844.972006]
Oct 29 20:30:01 thoregon kernel: [ 2844.972007]        CPU0
Oct 29 20:30:01 thoregon kernel: [ 2844.972007]        ----
Oct 29 20:30:01 thoregon kernel: [ 2844.972008]   lock(&(&ip->i_lock)->mr_lock);
Oct 29 20:30:01 thoregon kernel: [ 2844.972009]   <Interrupt>
Oct 29 20:30:01 thoregon kernel: [ 2844.972010]     lock(&(&ip->i_lock)->mr_lock);
Oct 29 20:30:01 thoregon kernel: [ 2844.972012]
Oct 29 20:30:01 thoregon kernel: [ 2844.972012]  *** DEADLOCK ***
Oct 29 20:30:01 thoregon kernel: [ 2844.972012]
Oct 29 20:30:01 thoregon kernel: [ 2844.972013] 3 locks held by kswapd0/725:
Oct 29 20:30:01 thoregon kernel: [ 2844.972014]  #0: (shrinker_rwsem){++++..}, at: [<ffffffff810bbd22>] shrink_slab+0x32/0x1f0
Oct 29 20:30:01 thoregon kernel: [ 2844.972020]  #1: (&type->s_umount_key#20){++++.+}, at: [<ffffffff810f5a8e>] grab_super_passive+0x3e/0x90
Oct 29 20:30:01 thoregon kernel: [ 2844.972024]  #2: (&pag->pag_ici_reclaim_lock){+.+...}, at: [<ffffffff811f263c>] xfs_reclaim_inodes_ag+0xbc/0x4f0
Oct 29 20:30:01 thoregon kernel: [ 2844.972027]
Oct 29 20:30:01 thoregon kernel: [ 2844.972027] stack backtrace:
Oct 29 20:30:01 thoregon kernel: [ 2844.972029] Pid: 725, comm: kswapd0 Not tainted 3.7.0-rc3 #1
Oct 29 20:30:01 thoregon kernel: [ 2844.972031] Call Trace:
Oct 29 20:30:01 thoregon kernel: [ 2844.972035]  [<ffffffff816a782c>] print_usage_bug+0x1f5/0x206
Oct 29 20:30:01 thoregon kernel: [ 2844.972039]  [<ffffffff8100ed8a>] ? save_stack_trace+0x2a/0x50
Oct 29 20:30:01 thoregon kernel: [ 2844.972042]  [<ffffffff8107e9fd>] mark_lock+0x28d/0x2f0
Oct 29 20:30:01 thoregon kernel: [ 2844.972044]  [<ffffffff8107de30>] ? print_irq_inversion_bug.part.37+0x1f0/0x1f0
Oct 29 20:30:01 thoregon kernel: [ 2844.972047]  [<ffffffff8107efdf>] __lock_acquire+0x57f/0x1c00
Oct 29 20:30:01 thoregon kernel: [ 2844.972049]  [<ffffffff8107c899>] ? __lock_is_held+0x59/0x70
Oct 29 20:30:01 thoregon kernel: [ 2844.972051]  [<ffffffff81080b55>] lock_acquire+0x55/0x70
Oct 29 20:30:01 thoregon kernel: [ 2844.972053]  [<ffffffff811e7ef4>] ? xfs_ilock+0x84/0xb0
Oct 29 20:30:01 thoregon kernel: [ 2844.972056]  [<ffffffff8106126a>] down_write_nested+0x4a/0x70
Oct 29 20:30:01 thoregon kernel: [ 2844.972058]  [<ffffffff811e7ef4>] ? xfs_ilock+0x84/0xb0
Oct 29 20:30:01 thoregon kernel: [ 2844.972060]  [<ffffffff811e7ef4>] xfs_ilock+0x84/0xb0
Oct 29 20:30:01 thoregon kernel: [ 2844.972063]  [<ffffffff811f1a76>] xfs_reclaim_inode+0x136/0x340
Oct 29 20:30:01 thoregon kernel: [ 2844.972065]  [<ffffffff811f283f>] xfs_reclaim_inodes_ag+0x2bf/0x4f0
Oct 29 20:30:01 thoregon kernel: [ 2844.972066]  [<ffffffff811f2660>] ? xfs_reclaim_inodes_ag+0xe0/0x4f0
Oct 29 20:30:01 thoregon kernel: [ 2844.972069]  [<ffffffff811f2bae>] xfs_reclaim_inodes_nr+0x2e/0x40
Oct 29 20:30:01 thoregon kernel: [ 2844.972071]  [<ffffffff811ef480>] xfs_fs_free_cached_objects+0x10/0x20
Oct 29 20:30:01 thoregon kernel: [ 2844.972073]  [<ffffffff810f5bf3>] prune_super+0x113/0x1a0
Oct 29 20:30:01 thoregon kernel: [ 2844.972075]  [<ffffffff810bbe0e>] shrink_slab+0x11e/0x1f0
Oct 29 20:30:01 thoregon kernel: [ 2844.972077]  [<ffffffff810be400>] kswapd+0x690/0xa10
Oct 29 20:30:01 thoregon kernel: [ 2844.972080]  [<ffffffff8105ca30>] ? __init_waitqueue_head+0x60/0x60
Oct 29 20:30:01 thoregon kernel: [ 2844.972082]  [<ffffffff810bdd70>] ? shrink_lruvec+0x540/0x540
Oct 29 20:30:01 thoregon kernel: [ 2844.972084]  [<ffffffff8105c246>] kthread+0xd6/0xe0
Oct 29 20:30:01 thoregon kernel: [ 2844.972087]  [<ffffffff816b148b>] ? _raw_spin_unlock_irq+0x2b/0x50
Oct 29 20:30:01 thoregon kernel: [ 2844.972089]  [<ffffffff8105c170>] ? flush_kthread_worker+0xe0/0xe0
Oct 29 20:30:01 thoregon kernel: [ 2844.972091]  [<ffffffff816b21ec>] ret_from_fork+0x7c/0xb0
Oct 29 20:30:01 thoregon kernel: [ 2844.972093]  [<ffffffff8105c170>] ? flush_kthread_worker+0xe0/0xe0
Oct 29 20:30:01 thoregon cron[24374]: (root) CMD (test -x /usr/sbin/run-crons && /usr/sbin/run-crons)

As kswapd got stuck yesterday, I think LOCKDEP found a real problem.

If you need more information, please ask. I will try to provide it.

Thanks,

Torsten

* Re: Hang in XFS reclaim on 3.7.0-rc3
  2012-10-29 20:03 ` Torsten Kaiser
@ 2012-10-29 22:26   ` Dave Chinner
  -1 siblings, 0 replies; 31+ messages in thread
From: Dave Chinner @ 2012-10-29 22:26 UTC (permalink / raw)
  To: Torsten Kaiser; +Cc: xfs, Linux Kernel

On Mon, Oct 29, 2012 at 09:03:15PM +0100, Torsten Kaiser wrote:
> After experiencing a hang of all IO yesterday (
> http://marc.info/?l=linux-kernel&m=135142236520624&w=2 ), I turned on
> LOCKDEP after upgrading to -rc3.
> 
> I then tried to replicate the load that hung yesterday and got the
> following lockdep report, implicating XFS rather than the stacking of
> swap onto dm-crypt and md.
> 
> [ 2844.971913]
> [ 2844.971920] =================================
> [ 2844.971921] [ INFO: inconsistent lock state ]
> [ 2844.971924] 3.7.0-rc3 #1 Not tainted
> [ 2844.971925] ---------------------------------
> [ 2844.971927] inconsistent {RECLAIM_FS-ON-W} -> {IN-RECLAIM_FS-W} usage.
> [ 2844.971929] kswapd0/725 [HC0[0]:SC0[0]:HE1:SE1] takes:
> [ 2844.971931] (&(&ip->i_lock)->mr_lock){++++?.}, at: [<ffffffff811e7ef4>] xfs_ilock+0x84/0xb0
> [ 2844.971941] {RECLAIM_FS-ON-W} state was registered at:
> [ 2844.971942]   [<ffffffff8108137e>] mark_held_locks+0x7e/0x130
> [ 2844.971947]   [<ffffffff81081a63>] lockdep_trace_alloc+0x63/0xc0
> [ 2844.971949]   [<ffffffff810e9dd5>] kmem_cache_alloc+0x35/0xe0
> [ 2844.971952]   [<ffffffff810dba31>] vm_map_ram+0x271/0x770
> [ 2844.971955]   [<ffffffff811e10a6>] _xfs_buf_map_pages+0x46/0xe0
> [ 2844.971959]   [<ffffffff811e1fba>] xfs_buf_get_map+0x8a/0x130
> [ 2844.971961]   [<ffffffff81233849>] xfs_trans_get_buf_map+0xa9/0xd0
> [ 2844.971964]   [<ffffffff8121e339>] xfs_ifree_cluster+0x129/0x670
> [ 2844.971967]   [<ffffffff8121f959>] xfs_ifree+0xe9/0xf0
> [ 2844.971969]   [<ffffffff811f4abf>] xfs_inactive+0x2af/0x480
> [ 2844.971972]   [<ffffffff811efb90>] xfs_fs_evict_inode+0x70/0x80
> [ 2844.971974]   [<ffffffff8110cb8f>] evict+0xaf/0x1b0
> [ 2844.971977]   [<ffffffff8110cd95>] iput+0x105/0x210
> [ 2844.971979]   [<ffffffff811070d0>] dentry_iput+0xa0/0xe0
> [ 2844.971981]   [<ffffffff81108310>] dput+0x150/0x280
> [ 2844.971983]   [<ffffffff811020fb>] sys_renameat+0x21b/0x290
> [ 2844.971986]   [<ffffffff81102186>] sys_rename+0x16/0x20
> [ 2844.971988]   [<ffffffff816b2292>] system_call_fastpath+0x16/0x1b

We shouldn't be mapping pages there. See if the patch below fixes
it.

Fundamentally, though, the lockdep warning has come about because
vm_map_ram is doing a GFP_KERNEL allocation when we need it to be
doing GFP_NOFS - we are within a transaction here, so memory reclaim
is not allowed to recurse back into the filesystem.

mm-folk: can we please get this vmalloc/gfp_flags passing API
fixed once and for all? This is the fourth time in the last month or
so that I've seen XFS bug reports with silent hangs and associated
lockdep output that implicate GFP_KERNEL allocations from vm_map_ram
in GFP_NOFS conditions as the potential cause....

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

xfs: don't vmap inode cluster buffers during free

From: Dave Chinner <dchinner@redhat.com>

Signed-off-by: Dave Chinner <dchinner@redhat.com>
---
 fs/xfs/xfs_inode.c |    3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/fs/xfs/xfs_inode.c b/fs/xfs/xfs_inode.c
index c4add46..82f6e5d 100644
--- a/fs/xfs/xfs_inode.c
+++ b/fs/xfs/xfs_inode.c
@@ -1781,7 +1781,8 @@ xfs_ifree_cluster(
 		 * to mark all the active inodes on the buffer stale.
 		 */
 		bp = xfs_trans_get_buf(tp, mp->m_ddev_targp, blkno,
-					mp->m_bsize * blks_per_cluster, 0);
+					mp->m_bsize * blks_per_cluster,
+					XBF_UNMAPPED);
 
 		if (!bp)
 			return ENOMEM;

* Re: Hang in XFS reclaim on 3.7.0-rc3
  2012-10-29 22:26   ` Dave Chinner
@ 2012-10-29 22:41     ` Dave Chinner
  -1 siblings, 0 replies; 31+ messages in thread
From: Dave Chinner @ 2012-10-29 22:41 UTC (permalink / raw)
  To: Torsten Kaiser; +Cc: xfs, Linux Kernel, linux-mm

[add the linux-mm cc I forgot to add before sending]

On Tue, Oct 30, 2012 at 09:26:13AM +1100, Dave Chinner wrote:
> On Mon, Oct 29, 2012 at 09:03:15PM +0100, Torsten Kaiser wrote:
> > After experiencing a hang of all IO yesterday (
> > http://marc.info/?l=linux-kernel&m=135142236520624&w=2 ), I turned on
> > LOCKDEP after upgrading to -rc3.
> > 
> > I then tried to replicate the load that hung yesterday and got the
> > following lockdep report, implicating XFS rather than the stacking of
> > swap onto dm-crypt and md.
> > 
> > [ 2844.971913]
> > [ 2844.971920] =================================
> > [ 2844.971921] [ INFO: inconsistent lock state ]
> > [ 2844.971924] 3.7.0-rc3 #1 Not tainted
> > [ 2844.971925] ---------------------------------
> > [ 2844.971927] inconsistent {RECLAIM_FS-ON-W} -> {IN-RECLAIM_FS-W} usage.
> > [ 2844.971929] kswapd0/725 [HC0[0]:SC0[0]:HE1:SE1] takes:
> > [ 2844.971931] (&(&ip->i_lock)->mr_lock){++++?.}, at: [<ffffffff811e7ef4>] xfs_ilock+0x84/0xb0
> > [ 2844.971941] {RECLAIM_FS-ON-W} state was registered at:
> > [ 2844.971942]   [<ffffffff8108137e>] mark_held_locks+0x7e/0x130
> > [ 2844.971947]   [<ffffffff81081a63>] lockdep_trace_alloc+0x63/0xc0
> > [ 2844.971949]   [<ffffffff810e9dd5>] kmem_cache_alloc+0x35/0xe0
> > [ 2844.971952]   [<ffffffff810dba31>] vm_map_ram+0x271/0x770
> > [ 2844.971955]   [<ffffffff811e10a6>] _xfs_buf_map_pages+0x46/0xe0
> > [ 2844.971959]   [<ffffffff811e1fba>] xfs_buf_get_map+0x8a/0x130
> > [ 2844.971961]   [<ffffffff81233849>] xfs_trans_get_buf_map+0xa9/0xd0
> > [ 2844.971964]   [<ffffffff8121e339>] xfs_ifree_cluster+0x129/0x670
> > [ 2844.971967]   [<ffffffff8121f959>] xfs_ifree+0xe9/0xf0
> > [ 2844.971969]   [<ffffffff811f4abf>] xfs_inactive+0x2af/0x480
> > [ 2844.971972]   [<ffffffff811efb90>] xfs_fs_evict_inode+0x70/0x80
> > [ 2844.971974]   [<ffffffff8110cb8f>] evict+0xaf/0x1b0
> > [ 2844.971977]   [<ffffffff8110cd95>] iput+0x105/0x210
> > [ 2844.971979]   [<ffffffff811070d0>] dentry_iput+0xa0/0xe0
> > [ 2844.971981]   [<ffffffff81108310>] dput+0x150/0x280
> > [ 2844.971983]   [<ffffffff811020fb>] sys_renameat+0x21b/0x290
> > [ 2844.971986]   [<ffffffff81102186>] sys_rename+0x16/0x20
> > [ 2844.971988]   [<ffffffff816b2292>] system_call_fastpath+0x16/0x1b
> 
> We shouldn't be mapping pages there. See if the patch below fixes
> it.
> 
> Fundamentally, though, the lockdep warning has come about because
> vm_map_ram is doing a GFP_KERNEL allocation when we need it to be
> doing GFP_NOFS - we are within a transaction here, so memory reclaim
> is not allowed to recurse back into the filesystem.
> 
> mm-folk: can we please get this vmalloc/gfp_flags passing API
> fixed once and for all? This is the fourth time in the last month or
> so that I've seen XFS bug reports with silent hangs and associated
> lockdep output that implicate GFP_KERNEL allocations from vm_map_ram
> in GFP_NOFS conditions as the potential cause....
> 
> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@fromorbit.com
> 
> xfs: don't vmap inode cluster buffers during free
> 
> From: Dave Chinner <dchinner@redhat.com>
> 
> Signed-off-by: Dave Chinner <dchinner@redhat.com>
> ---
>  fs/xfs/xfs_inode.c |    3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/fs/xfs/xfs_inode.c b/fs/xfs/xfs_inode.c
> index c4add46..82f6e5d 100644
> --- a/fs/xfs/xfs_inode.c
> +++ b/fs/xfs/xfs_inode.c
> @@ -1781,7 +1781,8 @@ xfs_ifree_cluster(
>  		 * to mark all the active inodes on the buffer stale.
>  		 */
>  		bp = xfs_trans_get_buf(tp, mp->m_ddev_targp, blkno,
> -					mp->m_bsize * blks_per_cluster, 0);
> +					mp->m_bsize * blks_per_cluster,
> +					XBF_UNMAPPED);
>  
>  		if (!bp)
>  			return ENOMEM;

-- 
Dave Chinner
david@fromorbit.com

> > [ 2844.971929] kswapd0/725 [HC0[0]:SC0[0]:HE1:SE1] takes:
> > [ 2844.971931] (&(&ip->i_lock)->mr_lock){++++?.}, at: [<ffffffff811e7ef4>] xfs_ilock+0x84/0xb0
> > [ 2844.971941] {RECLAIM_FS-ON-W} state was registered at:
> > [ 2844.971942]   [<ffffffff8108137e>] mark_held_locks+0x7e/0x130
> > [ 2844.971947]   [<ffffffff81081a63>] lockdep_trace_alloc+0x63/0xc0
> > [ 2844.971949]   [<ffffffff810e9dd5>] kmem_cache_alloc+0x35/0xe0
> > [ 2844.971952]   [<ffffffff810dba31>] vm_map_ram+0x271/0x770
> > [ 2844.971955]   [<ffffffff811e10a6>] _xfs_buf_map_pages+0x46/0xe0
> > [ 2844.971959]   [<ffffffff811e1fba>] xfs_buf_get_map+0x8a/0x130
> > [ 2844.971961]   [<ffffffff81233849>] xfs_trans_get_buf_map+0xa9/0xd0
> > [ 2844.971964]   [<ffffffff8121e339>] xfs_ifree_cluster+0x129/0x670
> > [ 2844.971967]   [<ffffffff8121f959>] xfs_ifree+0xe9/0xf0
> > [ 2844.971969]   [<ffffffff811f4abf>] xfs_inactive+0x2af/0x480
> > [ 2844.971972]   [<ffffffff811efb90>] xfs_fs_evict_inode+0x70/0x80
> > [ 2844.971974]   [<ffffffff8110cb8f>] evict+0xaf/0x1b0
> > [ 2844.971977]   [<ffffffff8110cd95>] iput+0x105/0x210
> > [ 2844.971979]   [<ffffffff811070d0>] dentry_iput+0xa0/0xe0
> > [ 2844.971981]   [<ffffffff81108310>] dput+0x150/0x280
> > [ 2844.971983]   [<ffffffff811020fb>] sys_renameat+0x21b/0x290
> > [ 2844.971986]   [<ffffffff81102186>] sys_rename+0x16/0x20
> > [ 2844.971988]   [<ffffffff816b2292>] system_call_fastpath+0x16/0x1b
> 
> We shouldn't be mapping pages there. See if the patch below fixes
> it.
> 
> Fundamentally, though, the lockdep warning has come about because
> vm_map_ram is doing a GFP_KERNEL allocation when we need it to be
> doing GFP_NOFS - we are within a transaction here, so memory reclaim
> is not allowed to recurse back into the filesystem.
> 
> mm-folk: can we please get this vmalloc/gfp_flags passing API
> fixed once and for all? This is the fourth time in the last month or
> so that I've seen XFS bug reports with silent hangs and associated
> lockdep output that implicate GFP_KERNEL allocations from vm_map_ram
> in GFP_NOFS conditions as the potential cause....
> 
> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@fromorbit.com
> 
> xfs: don't vmap inode cluster buffers during free
> 
> From: Dave Chinner <dchinner@redhat.com>
> 
> Signed-off-by: Dave Chinner <dchinner@redhat.com>
> ---
>  fs/xfs/xfs_inode.c |    3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/fs/xfs/xfs_inode.c b/fs/xfs/xfs_inode.c
> index c4add46..82f6e5d 100644
> --- a/fs/xfs/xfs_inode.c
> +++ b/fs/xfs/xfs_inode.c
> @@ -1781,7 +1781,8 @@ xfs_ifree_cluster(
>  		 * to mark all the active inodes on the buffer stale.
>  		 */
>  		bp = xfs_trans_get_buf(tp, mp->m_ddev_targp, blkno,
> -					mp->m_bsize * blks_per_cluster, 0);
> +					mp->m_bsize * blks_per_cluster,
> +					XBF_UNMAPPED);
>  
>  		if (!bp)
>  			return ENOMEM;
> --

-- 
Dave Chinner
david@fromorbit.com

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: Hang in XFS reclaim on 3.7.0-rc3
  2012-10-29 22:26   ` Dave Chinner
@ 2012-10-30 20:37     ` Torsten Kaiser
  -1 siblings, 0 replies; 31+ messages in thread
From: Torsten Kaiser @ 2012-10-30 20:37 UTC (permalink / raw)
  To: Dave Chinner; +Cc: xfs, Linux Kernel

On Mon, Oct 29, 2012 at 11:26 PM, Dave Chinner <david@fromorbit.com> wrote:
> On Mon, Oct 29, 2012 at 09:03:15PM +0100, Torsten Kaiser wrote:
>> After experiencing a hang of all IO yesterday (
>> http://marc.info/?l=linux-kernel&m=135142236520624&w=2 ), I turned on
>> LOCKDEP after upgrading to -rc3.
>>
>> I then tried to replicate the load that hung yesterday and got the
>> following lockdep report, implicating XFS instead of by stacking swap
>> onto dm-crypt and md.
>>
>> [ 2844.971913]
>> [ 2844.971920] =================================
>> [ 2844.971921] [ INFO: inconsistent lock state ]
>> [ 2844.971924] 3.7.0-rc3 #1 Not tainted
>> [ 2844.971925] ---------------------------------
>> [ 2844.971927] inconsistent {RECLAIM_FS-ON-W} -> {IN-RECLAIM_FS-W} usage.
>> [ 2844.971929] kswapd0/725 [HC0[0]:SC0[0]:HE1:SE1] takes:
>> [ 2844.971931] (&(&ip->i_lock)->mr_lock){++++?.}, at: [<ffffffff811e7ef4>] xfs_ilock+0x84/0xb0
>> [ 2844.971941] {RECLAIM_FS-ON-W} state was registered at:
>> [ 2844.971942]   [<ffffffff8108137e>] mark_held_locks+0x7e/0x130
>> [ 2844.971947]   [<ffffffff81081a63>] lockdep_trace_alloc+0x63/0xc0
>> [ 2844.971949]   [<ffffffff810e9dd5>] kmem_cache_alloc+0x35/0xe0
>> [ 2844.971952]   [<ffffffff810dba31>] vm_map_ram+0x271/0x770
>> [ 2844.971955]   [<ffffffff811e10a6>] _xfs_buf_map_pages+0x46/0xe0
>> [ 2844.971959]   [<ffffffff811e1fba>] xfs_buf_get_map+0x8a/0x130
>> [ 2844.971961]   [<ffffffff81233849>] xfs_trans_get_buf_map+0xa9/0xd0
>> [ 2844.971964]   [<ffffffff8121e339>] xfs_ifree_cluster+0x129/0x670
>> [ 2844.971967]   [<ffffffff8121f959>] xfs_ifree+0xe9/0xf0
>> [ 2844.971969]   [<ffffffff811f4abf>] xfs_inactive+0x2af/0x480
>> [ 2844.971972]   [<ffffffff811efb90>] xfs_fs_evict_inode+0x70/0x80
>> [ 2844.971974]   [<ffffffff8110cb8f>] evict+0xaf/0x1b0
>> [ 2844.971977]   [<ffffffff8110cd95>] iput+0x105/0x210
>> [ 2844.971979]   [<ffffffff811070d0>] dentry_iput+0xa0/0xe0
>> [ 2844.971981]   [<ffffffff81108310>] dput+0x150/0x280
>> [ 2844.971983]   [<ffffffff811020fb>] sys_renameat+0x21b/0x290
>> [ 2844.971986]   [<ffffffff81102186>] sys_rename+0x16/0x20
>> [ 2844.971988]   [<ffffffff816b2292>] system_call_fastpath+0x16/0x1b
>
> We shouldn't be mapping pages there. See if the patch below fixes
> it.

Applying your fix and rerunning my test workload did not trigger this
or any other LOCKDEP report.
While I'm not 100% sure that my test case always hits this, your
description makes me quite confident that it really fixed the issue.

I will keep LOCKDEP enabled on that system, and if there really is
another splat, I will report back here. But I rather doubt that this
will be needed.

Thanks for the very quick fix!

Torsten

> Fundamentally, though, the lockdep warning has come about because
> vm_map_ram is doing a GFP_KERNEL allocation when we need it to be
> doing GFP_NOFS - we are within a transaction here, so memory reclaim
> is not allowed to recurse back into the filesystem.
>
> mm-folk: can we please get this vmalloc/gfp_flags passing API
> fixed once and for all? This is the fourth time in the last month or
> so that I've seen XFS bug reports with silent hangs and associated
> lockdep output that implicate GFP_KERNEL allocations from vm_map_ram
> in GFP_NOFS conditions as the potential cause....
>
> Cheers,
>
> Dave.
> --
> Dave Chinner
> david@fromorbit.com
>
> xfs: don't vmap inode cluster buffers during free
>
> From: Dave Chinner <dchinner@redhat.com>
>
> Signed-off-by: Dave Chinner <dchinner@redhat.com>
> ---
>  fs/xfs/xfs_inode.c |    3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/fs/xfs/xfs_inode.c b/fs/xfs/xfs_inode.c
> index c4add46..82f6e5d 100644
> --- a/fs/xfs/xfs_inode.c
> +++ b/fs/xfs/xfs_inode.c
> @@ -1781,7 +1781,8 @@ xfs_ifree_cluster(
>                  * to mark all the active inodes on the buffer stale.
>                  */
>                 bp = xfs_trans_get_buf(tp, mp->m_ddev_targp, blkno,
> -                                       mp->m_bsize * blks_per_cluster, 0);
> +                                       mp->m_bsize * blks_per_cluster,
> +                                       XBF_UNMAPPED);
>
>                 if (!bp)
>                         return ENOMEM;

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: Hang in XFS reclaim on 3.7.0-rc3
  2012-10-30 20:37     ` Torsten Kaiser
@ 2012-10-30 20:46       ` Christoph Hellwig
  -1 siblings, 0 replies; 31+ messages in thread
From: Christoph Hellwig @ 2012-10-30 20:46 UTC (permalink / raw)
  To: Torsten Kaiser; +Cc: Dave Chinner, Linux Kernel, xfs

On Tue, Oct 30, 2012 at 09:37:06PM +0100, Torsten Kaiser wrote:
> On Mon, Oct 29, 2012 at 11:26 PM, Dave Chinner <david@fromorbit.com> wrote:

For some reason I only managed to get the two mails from Torsten into my
xfs list folder, but the one from Dave is missing.  I did see Dave's
mail in my linux-mm folder, though.

> > From: Dave Chinner <dchinner@redhat.com>
> >
> > Signed-off-by: Dave Chinner <dchinner@redhat.com>

Looks good,

Reviewed-by: Christoph Hellwig <hch@lst.de>

And I agree that vmap needs to be fixed to pass through the gfp flags
ASAP.


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: Hang in XFS reclaim on 3.7.0-rc3
  2012-10-29 22:26   ` Dave Chinner
@ 2012-11-01 21:30     ` Ben Myers
  -1 siblings, 0 replies; 31+ messages in thread
From: Ben Myers @ 2012-11-01 21:30 UTC (permalink / raw)
  To: Dave Chinner; +Cc: Torsten Kaiser, Linux Kernel, xfs

Hi Dave,

On Tue, Oct 30, 2012 at 09:26:13AM +1100, Dave Chinner wrote:
> On Mon, Oct 29, 2012 at 09:03:15PM +0100, Torsten Kaiser wrote:
> > After experiencing a hang of all IO yesterday (
> > http://marc.info/?l=linux-kernel&m=135142236520624&w=2 ), I turned on
> > LOCKDEP after upgrading to -rc3.
> > 
> > I then tried to replicate the load that hung yesterday and got the
> > following lockdep report, implicating XFS instead of by stacking swap
> > onto dm-crypt and md.
> > 
> > [ 2844.971913]
> > [ 2844.971920] =================================
> > [ 2844.971921] [ INFO: inconsistent lock state ]
> > [ 2844.971924] 3.7.0-rc3 #1 Not tainted
> > [ 2844.971925] ---------------------------------
> > [ 2844.971927] inconsistent {RECLAIM_FS-ON-W} -> {IN-RECLAIM_FS-W} usage.
> > [ 2844.971929] kswapd0/725 [HC0[0]:SC0[0]:HE1:SE1] takes:
> > [ 2844.971931] (&(&ip->i_lock)->mr_lock){++++?.}, at: [<ffffffff811e7ef4>] xfs_ilock+0x84/0xb0
> > [ 2844.971941] {RECLAIM_FS-ON-W} state was registered at:
> > [ 2844.971942]   [<ffffffff8108137e>] mark_held_locks+0x7e/0x130
> > [ 2844.971947]   [<ffffffff81081a63>] lockdep_trace_alloc+0x63/0xc0
> > [ 2844.971949]   [<ffffffff810e9dd5>] kmem_cache_alloc+0x35/0xe0
> > [ 2844.971952]   [<ffffffff810dba31>] vm_map_ram+0x271/0x770
> > [ 2844.971955]   [<ffffffff811e10a6>] _xfs_buf_map_pages+0x46/0xe0
> > [ 2844.971959]   [<ffffffff811e1fba>] xfs_buf_get_map+0x8a/0x130
> > [ 2844.971961]   [<ffffffff81233849>] xfs_trans_get_buf_map+0xa9/0xd0
> > [ 2844.971964]   [<ffffffff8121e339>] xfs_ifree_cluster+0x129/0x670
> > [ 2844.971967]   [<ffffffff8121f959>] xfs_ifree+0xe9/0xf0
> > [ 2844.971969]   [<ffffffff811f4abf>] xfs_inactive+0x2af/0x480
> > [ 2844.971972]   [<ffffffff811efb90>] xfs_fs_evict_inode+0x70/0x80
> > [ 2844.971974]   [<ffffffff8110cb8f>] evict+0xaf/0x1b0
> > [ 2844.971977]   [<ffffffff8110cd95>] iput+0x105/0x210
> > [ 2844.971979]   [<ffffffff811070d0>] dentry_iput+0xa0/0xe0
> > [ 2844.971981]   [<ffffffff81108310>] dput+0x150/0x280
> > [ 2844.971983]   [<ffffffff811020fb>] sys_renameat+0x21b/0x290
> > [ 2844.971986]   [<ffffffff81102186>] sys_rename+0x16/0x20
> > [ 2844.971988]   [<ffffffff816b2292>] system_call_fastpath+0x16/0x1b
> 
> We shouldn't be mapping pages there. See if the patch below fixes
> it.
> 
> Fundamentally, though, the lockdep warning has come about because
> vm_map_ram is doing a GFP_KERNEL allocation when we need it to be
> doing GFP_NOFS - we are within a transaction here, so memory reclaim
> is not allowed to recurse back into the filesystem.
> 
> mm-folk: can we please get this vmalloc/gfp_flags passing API
> fixed once and for all? This is the fourth time in the last month or
> so that I've seen XFS bug reports with silent hangs and associated
> lockdep output that implicate GFP_KERNEL allocations from vm_map_ram
> in GFP_NOFS conditions as the potential cause....
> 
> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@fromorbit.com
> 
> xfs: don't vmap inode cluster buffers during free

Could you write up a little more background for the commit message?

Regards,
	Ben

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: Hang in XFS reclaim on 3.7.0-rc3
  2012-11-01 21:30     ` Ben Myers
@ 2012-11-01 22:32       ` Dave Chinner
  -1 siblings, 0 replies; 31+ messages in thread
From: Dave Chinner @ 2012-11-01 22:32 UTC (permalink / raw)
  To: Ben Myers; +Cc: Torsten Kaiser, Linux Kernel, xfs

On Thu, Nov 01, 2012 at 04:30:10PM -0500, Ben Myers wrote:
> Hi Dave,
> 
> On Tue, Oct 30, 2012 at 09:26:13AM +1100, Dave Chinner wrote:
> > On Mon, Oct 29, 2012 at 09:03:15PM +0100, Torsten Kaiser wrote:
> > > After experiencing a hang of all IO yesterday (
> > > http://marc.info/?l=linux-kernel&m=135142236520624&w=2 ), I turned on
> > > LOCKDEP after upgrading to -rc3.
> > > 
> > > I then tried to replicate the load that hung yesterday and got the
> > > following lockdep report, implicating XFS instead of by stacking swap
> > > onto dm-crypt and md.
> > > 
> > > [ 2844.971913]
> > > [ 2844.971920] =================================
> > > [ 2844.971921] [ INFO: inconsistent lock state ]
> > > [ 2844.971924] 3.7.0-rc3 #1 Not tainted
> > > [ 2844.971925] ---------------------------------
> > > [ 2844.971927] inconsistent {RECLAIM_FS-ON-W} -> {IN-RECLAIM_FS-W} usage.
> > > [ 2844.971929] kswapd0/725 [HC0[0]:SC0[0]:HE1:SE1] takes:
> > > [ 2844.971931] (&(&ip->i_lock)->mr_lock){++++?.}, at: [<ffffffff811e7ef4>] xfs_ilock+0x84/0xb0
> > > [ 2844.971941] {RECLAIM_FS-ON-W} state was registered at:
> > > [ 2844.971942]   [<ffffffff8108137e>] mark_held_locks+0x7e/0x130
> > > [ 2844.971947]   [<ffffffff81081a63>] lockdep_trace_alloc+0x63/0xc0
> > > [ 2844.971949]   [<ffffffff810e9dd5>] kmem_cache_alloc+0x35/0xe0
> > > [ 2844.971952]   [<ffffffff810dba31>] vm_map_ram+0x271/0x770
> > > [ 2844.971955]   [<ffffffff811e10a6>] _xfs_buf_map_pages+0x46/0xe0
.....
> > We shouldn't be mapping pages there. See if the patch below fixes
> > it.
> > 
> > Fundamentally, though, the lockdep warning has come about because
> > vm_map_ram is doing a GFP_KERNEL allocation when we need it to be
> > doing GFP_NOFS - we are within a transaction here, so memory reclaim
> > is not allowed to recurse back into the filesystem.
> > 
> > mm-folk: can we please get this vmalloc/gfp_flags passing API
> > fixed once and for all? This is the fourth time in the last month or
> > so that I've seen XFS bug reports with silent hangs and associated
> > lockdep output that implicate GFP_KERNEL allocations from vm_map_ram
> > in GFP_NOFS conditions as the potential cause....
> > 
> > xfs: don't vmap inode cluster buffers during free
> 
> Could you write up a little more background for the commit message?

Sure, that was just a test patch, and I often don't bother putting a
detailed description into test patches until I know they fix the
problem. My current tree has:

xfs: don't vmap inode cluster buffers during free

Inode buffers do not need to be mapped as inodes are read or written
directly from/to the pages underlying the buffer. This fixes a
regression introduced by commit 611c994 ("xfs: make XBF_MAPPED the
default behaviour").

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: Hang in XFS reclaim on 3.7.0-rc3
  2012-10-30 20:37     ` Torsten Kaiser
@ 2012-11-18 10:24       ` Torsten Kaiser
  -1 siblings, 0 replies; 31+ messages in thread
From: Torsten Kaiser @ 2012-11-18 10:24 UTC (permalink / raw)
  To: Dave Chinner; +Cc: xfs, Linux Kernel

On Tue, Oct 30, 2012 at 9:37 PM, Torsten Kaiser
<just.for.lkml@googlemail.com> wrote:
> I will keep LOCKDEP enabled on that system, and if there really is
> another splat, I will report back here. But I rather doubt that this
> will be needed.

After the patch, I did not see this problem again, but today I found
another LOCKDEP report that also looks XFS-related.
I found it twice in the logs, and as the two reports differ slightly, I
will attach both versions.


Nov  6 21:57:09 thoregon kernel: [ 9941.104345]
Nov  6 21:57:09 thoregon kernel: [ 9941.104350] =================================
Nov  6 21:57:09 thoregon kernel: [ 9941.104351] [ INFO: inconsistent lock state ]
Nov  6 21:57:09 thoregon kernel: [ 9941.104353] 3.7.0-rc4 #1 Not tainted
Nov  6 21:57:09 thoregon kernel: [ 9941.104354] ---------------------------------
Nov  6 21:57:09 thoregon kernel: [ 9941.104355] inconsistent {RECLAIM_FS-ON-W} -> {IN-RECLAIM_FS-W} usage.
Nov  6 21:57:09 thoregon kernel: [ 9941.104357] kswapd0/725 [HC0[0]:SC0[0]:HE1:SE1] takes:
Nov  6 21:57:09 thoregon kernel: [ 9941.104359] (&(&ip->i_lock)->mr_lock){++++?.}, at: [<ffffffff811e8164>] xfs_ilock+0x84/0xb0
Nov  6 21:57:09 thoregon kernel: [ 9941.104366] {RECLAIM_FS-ON-W} state was registered at:
Nov  6 21:57:09 thoregon kernel: [ 9941.104367]   [<ffffffff8108137e>] mark_held_locks+0x7e/0x130
Nov  6 21:57:09 thoregon kernel: [ 9941.104371]   [<ffffffff81081a63>] lockdep_trace_alloc+0x63/0xc0
Nov  6 21:57:09 thoregon kernel: [ 9941.104373]   [<ffffffff810b5a55>] __alloc_pages_nodemask+0x75/0x800
Nov  6 21:57:09 thoregon kernel: [ 9941.104375]   [<ffffffff810b6262>] __get_free_pages+0x12/0x40
Nov  6 21:57:09 thoregon kernel: [ 9941.104377]   [<ffffffff8102d7f0>] pte_alloc_one_kernel+0x10/0x20
Nov  6 21:57:09 thoregon kernel: [ 9941.104380]   [<ffffffff810cc3e6>] __pte_alloc_kernel+0x16/0x90
Nov  6 21:57:09 thoregon kernel: [ 9941.104382]   [<ffffffff810d9f37>] vmap_page_range_noflush+0x287/0x320
Nov  6 21:57:09 thoregon kernel: [ 9941.104385]   [<ffffffff810dbe54>] vm_map_ram+0x694/0x770
Nov  6 21:57:09 thoregon kernel: [ 9941.104386]   [<ffffffff811e1316>] _xfs_buf_map_pages+0x46/0xe0
Nov  6 21:57:09 thoregon kernel: [ 9941.104389]   [<ffffffff811e222a>] xfs_buf_get_map+0x8a/0x130
Nov  6 21:57:09 thoregon kernel: [ 9941.104391]   [<ffffffff81233ab9>] xfs_trans_get_buf_map+0xa9/0xd0
Nov  6 21:57:09 thoregon kernel: [ 9941.104393]   [<ffffffff8121e5a9>] xfs_ifree_cluster+0x129/0x670
Nov  6 21:57:09 thoregon kernel: [ 9941.104396]   [<ffffffff8121fbc9>] xfs_ifree+0xe9/0xf0
Nov  6 21:57:09 thoregon kernel: [ 9941.104398]   [<ffffffff811f4d2f>] xfs_inactive+0x2af/0x480
Nov  6 21:57:09 thoregon kernel: [ 9941.104400]   [<ffffffff811efe00>] xfs_fs_evict_inode+0x70/0x80
Nov  6 21:57:09 thoregon kernel: [ 9941.104402]   [<ffffffff8110cb8f>] evict+0xaf/0x1b0
Nov  6 21:57:09 thoregon kernel: [ 9941.104405]   [<ffffffff8110cd95>] iput+0x105/0x210
Nov  6 21:57:09 thoregon kernel: [ 9941.104406]   [<ffffffff81107ba0>] d_delete+0x150/0x190
Nov  6 21:57:09 thoregon kernel: [ 9941.104408]   [<ffffffff810ff8a7>] vfs_rmdir+0x107/0x120
Nov  6 21:57:09 thoregon kernel: [ 9941.104411]   [<ffffffff810ff9a4>] do_rmdir+0xe4/0x130
Nov  6 21:57:09 thoregon kernel: [ 9941.104413]   [<ffffffff81101c01>] sys_rmdir+0x11/0x20
Nov  6 21:57:09 thoregon kernel: [ 9941.104415]   [<ffffffff816b2d12>] system_call_fastpath+0x16/0x1b
Nov  6 21:57:09 thoregon kernel: [ 9941.104417] irq event stamp: 18505
Nov  6 21:57:09 thoregon kernel: [ 9941.104418] hardirqs last  enabled at (18505): [<ffffffff816aec5d>] mutex_trylock+0xfd/0x170
Nov  6 21:57:09 thoregon kernel: [ 9941.104421] hardirqs last disabled at (18504): [<ffffffff816aeb9e>] mutex_trylock+0x3e/0x170
Nov  6 21:57:09 thoregon kernel: [ 9941.104423] softirqs last  enabled
at (18492): [<ffffffff81042fb1>] __do_softirq+0x111/0x170
Nov  6 21:57:09 thoregon kernel: [ 9941.104426] softirqs last disabled
at (18477): [<ffffffff816b3e3c>] call_softirq+0x1c/0x30
Nov  6 21:57:09 thoregon kernel: [ 9941.104428]
Nov  6 21:57:09 thoregon kernel: [ 9941.104428] other info that might
help us debug this:
Nov  6 21:57:09 thoregon kernel: [ 9941.104429]  Possible unsafe
locking scenario:
Nov  6 21:57:09 thoregon kernel: [ 9941.104429]
Nov  6 21:57:09 thoregon kernel: [ 9941.104430]        CPU0
Nov  6 21:57:09 thoregon kernel: [ 9941.104431]        ----
Nov  6 21:57:09 thoregon kernel: [ 9941.104432]   lock(&(&ip->i_lock)->mr_lock);
Nov  6 21:57:09 thoregon kernel: [ 9941.104433]   <Interrupt>
Nov  6 21:57:09 thoregon kernel: [ 9941.104434]
lock(&(&ip->i_lock)->mr_lock);
Nov  6 21:57:09 thoregon kernel: [ 9941.104435]
Nov  6 21:57:09 thoregon kernel: [ 9941.104435]  *** DEADLOCK ***
Nov  6 21:57:09 thoregon kernel: [ 9941.104435]
Nov  6 21:57:09 thoregon kernel: [ 9941.104437] 3 locks held by kswapd0/725:
Nov  6 21:57:09 thoregon kernel: [ 9941.104438]  #0:
(shrinker_rwsem){++++..}, at: [<ffffffff810bbd22>]
shrink_slab+0x32/0x1f0
Nov  6 21:57:09 thoregon kernel: [ 9941.104442]  #1:
(&type->s_umount_key#20){++++.+}, at: [<ffffffff810f5a8e>]
grab_super_passive+0x3e/0x90
Nov  6 21:57:09 thoregon kernel: [ 9941.104446]  #2:
(&pag->pag_ici_reclaim_lock){+.+...}, at: [<ffffffff811f28ac>]
xfs_reclaim_inodes_ag+0xbc/0x4f0
Nov  6 21:57:09 thoregon kernel: [ 9941.104449]
Nov  6 21:57:09 thoregon kernel: [ 9941.104449] stack backtrace:
Nov  6 21:57:09 thoregon kernel: [ 9941.104451] Pid: 725, comm:
kswapd0 Not tainted 3.7.0-rc4 #1
Nov  6 21:57:09 thoregon kernel: [ 9941.104452] Call Trace:
Nov  6 21:57:09 thoregon kernel: [ 9941.104456]  [<ffffffff816a829c>]
print_usage_bug+0x1f5/0x206
Nov  6 21:57:09 thoregon kernel: [ 9941.104460]  [<ffffffff8100ed8a>]
? save_stack_trace+0x2a/0x50
Nov  6 21:57:09 thoregon kernel: [ 9941.104462]  [<ffffffff8107e9fd>]
mark_lock+0x28d/0x2f0
Nov  6 21:57:09 thoregon kernel: [ 9941.104464]  [<ffffffff8107de30>]
? print_irq_inversion_bug.part.37+0x1f0/0x1f0
Nov  6 21:57:09 thoregon kernel: [ 9941.104466]  [<ffffffff8107efdf>]
__lock_acquire+0x57f/0x1c00
Nov  6 21:57:09 thoregon kernel: [ 9941.104468]  [<ffffffff8107c899>]
? __lock_is_held+0x59/0x70
Nov  6 21:57:09 thoregon kernel: [ 9941.104470]  [<ffffffff81080b55>]
lock_acquire+0x55/0x70
Nov  6 21:57:09 thoregon kernel: [ 9941.104472]  [<ffffffff811e8164>]
? xfs_ilock+0x84/0xb0
Nov  6 21:57:09 thoregon kernel: [ 9941.104476]  [<ffffffff8106126a>]
down_write_nested+0x4a/0x70
Nov  6 21:57:09 thoregon kernel: [ 9941.104477]  [<ffffffff811e8164>]
? xfs_ilock+0x84/0xb0
Nov  6 21:57:09 thoregon kernel: [ 9941.104479]  [<ffffffff811e8164>]
xfs_ilock+0x84/0xb0
Nov  6 21:57:09 thoregon kernel: [ 9941.104481]  [<ffffffff811f1ce6>]
xfs_reclaim_inode+0x136/0x340
Nov  6 21:57:09 thoregon kernel: [ 9941.104483]  [<ffffffff811f2aaf>]
xfs_reclaim_inodes_ag+0x2bf/0x4f0
Nov  6 21:57:09 thoregon kernel: [ 9941.104485]  [<ffffffff811f28d0>]
? xfs_reclaim_inodes_ag+0xe0/0x4f0
Nov  6 21:57:09 thoregon kernel: [ 9941.104487]  [<ffffffff811f2e1e>]
xfs_reclaim_inodes_nr+0x2e/0x40
Nov  6 21:57:09 thoregon kernel: [ 9941.104489]  [<ffffffff811ef6f0>]
xfs_fs_free_cached_objects+0x10/0x20
Nov  6 21:57:09 thoregon kernel: [ 9941.104491]  [<ffffffff810f5bf3>]
prune_super+0x113/0x1a0
Nov  6 21:57:09 thoregon kernel: [ 9941.104493]  [<ffffffff810bbe0e>]
shrink_slab+0x11e/0x1f0
Nov  6 21:57:09 thoregon kernel: [ 9941.104496]  [<ffffffff810be400>]
kswapd+0x690/0xa10
Nov  6 21:57:09 thoregon kernel: [ 9941.104498]  [<ffffffff8105ca30>]
? __init_waitqueue_head+0x60/0x60
Nov  6 21:57:09 thoregon kernel: [ 9941.104500]  [<ffffffff810bdd70>]
? shrink_lruvec+0x540/0x540
Nov  6 21:57:09 thoregon kernel: [ 9941.104502]  [<ffffffff8105c246>]
kthread+0xd6/0xe0
Nov  6 21:57:09 thoregon kernel: [ 9941.104504]  [<ffffffff816b1efb>]
? _raw_spin_unlock_irq+0x2b/0x50
Nov  6 21:57:09 thoregon kernel: [ 9941.104506]  [<ffffffff8105c170>]
? flush_kthread_worker+0xe0/0xe0
Nov  6 21:57:09 thoregon kernel: [ 9941.104508]  [<ffffffff816b2c6c>]
ret_from_fork+0x7c/0xb0
Nov  6 21:57:09 thoregon kernel: [ 9941.104510]  [<ffffffff8105c170>]
? flush_kthread_worker+0xe0/0xe0

Nov 17 14:07:38 thoregon kernel: [66571.610863]
Nov 17 14:07:38 thoregon kernel: [66571.610869]
=========================================================
Nov 17 14:07:38 thoregon kernel: [66571.610870] [ INFO: possible irq
lock inversion dependency detected ]
Nov 17 14:07:38 thoregon kernel: [66571.610873] 3.7.0-rc5 #1 Not tainted
Nov 17 14:07:38 thoregon kernel: [66571.610874]
---------------------------------------------------------
Nov 17 14:07:38 thoregon kernel: [66571.610875] cc1/21330 just changed
the state of lock:
Nov 17 14:07:38 thoregon kernel: [66571.610877]
(sb_internal){.+.+.?}, at: [<ffffffff8122b138>]
xfs_trans_alloc+0x28/0x50
Nov 17 14:07:38 thoregon kernel: [66571.610885] but this lock took
another, RECLAIM_FS-unsafe lock in the past:
Nov 17 14:07:38 thoregon kernel: [66571.610886]
(&(&ip->i_lock)->mr_lock/1){+.+.+.}
Nov 17 14:07:38 thoregon kernel: [66571.610886]
Nov 17 14:07:39 thoregon kernel: [66571.610886] and interrupts could
create inverse lock ordering between them.
Nov 17 14:07:39 thoregon kernel: [66571.610886]
Nov 17 14:07:39 thoregon kernel: [66571.610890]
Nov 17 14:07:39 thoregon kernel: [66571.610890] other info that might
help us debug this:
Nov 17 14:07:39 thoregon kernel: [66571.610891]  Possible interrupt
unsafe locking scenario:
Nov 17 14:07:39 thoregon kernel: [66571.610891]
Nov 17 14:07:39 thoregon kernel: [66571.610892]        CPU0
        CPU1
Nov 17 14:07:39 thoregon kernel: [66571.610893]        ----
        ----
Nov 17 14:07:39 thoregon kernel: [66571.610894]
lock(&(&ip->i_lock)->mr_lock/1);
Nov 17 14:07:39 thoregon kernel: [66571.610896]
        local_irq_disable();
Nov 17 14:07:39 thoregon kernel: [66571.610897]
        lock(sb_internal);
Nov 17 14:07:39 thoregon kernel: [66571.610898]
        lock(&(&ip->i_lock)->mr_lock/1);
Nov 17 14:07:39 thoregon kernel: [66571.610900]   <Interrupt>
Nov 17 14:07:39 thoregon kernel: [66571.610901]     lock(sb_internal);
Nov 17 14:07:39 thoregon kernel: [66571.610902]
Nov 17 14:07:39 thoregon kernel: [66571.610902]  *** DEADLOCK ***
Nov 17 14:07:39 thoregon kernel: [66571.610902]
Nov 17 14:07:39 thoregon kernel: [66571.610904] 3 locks held by cc1/21330:
Nov 17 14:07:39 thoregon kernel: [66571.610905]  #0:
(&mm->mmap_sem){++++++}, at: [<ffffffff81029d8b>]
__do_page_fault+0xfb/0x480
Nov 17 14:07:39 thoregon kernel: [66571.610910]  #1:
(shrinker_rwsem){++++..}, at: [<ffffffff810bbd02>]
shrink_slab+0x32/0x1f0
Nov 17 14:07:39 thoregon kernel: [66571.610915]  #2:
(&type->s_umount_key#20){++++.+}, at: [<ffffffff810f5a7e>]
grab_super_passive+0x3e/0x90
Nov 17 14:07:39 thoregon kernel: [66571.610921]
Nov 17 14:07:39 thoregon kernel: [66571.610921] the shortest
dependencies between 2nd lock and 1st lock:
Nov 17 14:07:39 thoregon kernel: [66571.610927]  ->
(&(&ip->i_lock)->mr_lock/1){+.+.+.} ops: 169649 {
Nov 17 14:07:39 thoregon kernel: [66571.610931]     HARDIRQ-ON-W at:
Nov 17 14:07:39 thoregon kernel: [66571.610932]
[<ffffffff8107f091>] __lock_acquire+0x631/0x1c00
Nov 17 14:07:39 thoregon kernel: [66571.610935]
[<ffffffff81080b55>] lock_acquire+0x55/0x70
Nov 17 14:07:39 thoregon kernel: [66571.610937]
[<ffffffff8106126a>] down_write_nested+0x4a/0x70
Nov 17 14:07:39 thoregon kernel: [66571.610941]
[<ffffffff811e8164>] xfs_ilock+0x84/0xb0
Nov 17 14:07:39 thoregon kernel: [66571.610944]
[<ffffffff811f51a4>] xfs_create+0x1d4/0x5a0
Nov 17 14:07:39 thoregon kernel: [66571.610946]
[<ffffffff811eca2a>] xfs_vn_mknod+0x8a/0x1b0
Nov 17 14:07:39 thoregon kernel: [66571.610948]
[<ffffffff811ecb7e>] xfs_vn_create+0xe/0x10
Nov 17 14:07:39 thoregon kernel: [66571.610950]
[<ffffffff81100322>] vfs_create+0x72/0xc0
Nov 17 14:07:39 thoregon kernel: [66571.610953]
[<ffffffff81100b7e>] do_last.isra.69+0x80e/0xc80
Nov 17 14:07:39 thoregon kernel: [66571.610955]
[<ffffffff8110109b>] path_openat.isra.70+0xab/0x490
Nov 17 14:07:39 thoregon kernel: [66571.610957]
[<ffffffff8110183d>] do_filp_open+0x3d/0xa0
Nov 17 14:07:39 thoregon kernel: [66571.610959]
[<ffffffff810f2129>] do_sys_open+0xf9/0x1e0
Nov 17 14:07:39 thoregon kernel: [66571.610962]
[<ffffffff810f222c>] sys_open+0x1c/0x20
Nov 17 14:07:39 thoregon kernel: [66571.610964]
[<ffffffff816b2e52>] system_call_fastpath+0x16/0x1b
Nov 17 14:07:39 thoregon kernel: [66571.610967]     SOFTIRQ-ON-W at:
Nov 17 14:07:39 thoregon kernel: [66571.610968]
[<ffffffff8107f0c7>] __lock_acquire+0x667/0x1c00
Nov 17 14:07:39 thoregon kernel: [66571.610970]
[<ffffffff81080b55>] lock_acquire+0x55/0x70
Nov 17 14:07:39 thoregon kernel: [66571.610972]
[<ffffffff8106126a>] down_write_nested+0x4a/0x70
Nov 17 14:07:39 thoregon kernel: [66571.610974]
[<ffffffff811e8164>] xfs_ilock+0x84/0xb0
Nov 17 14:07:39 thoregon kernel: [66571.610976]
[<ffffffff811f51a4>] xfs_create+0x1d4/0x5a0
Nov 17 14:07:39 thoregon kernel: [66571.610977]
[<ffffffff811eca2a>] xfs_vn_mknod+0x8a/0x1b0
Nov 17 14:07:39 thoregon kernel: [66571.610979]
[<ffffffff811ecb7e>] xfs_vn_create+0xe/0x10
Nov 17 14:07:39 thoregon kernel: [66571.610981]
[<ffffffff81100322>] vfs_create+0x72/0xc0
Nov 17 14:07:39 thoregon kernel: [66571.610983]
[<ffffffff81100b7e>] do_last.isra.69+0x80e/0xc80
Nov 17 14:07:39 thoregon kernel: [66571.610985]
[<ffffffff8110109b>] path_openat.isra.70+0xab/0x490
Nov 17 14:07:39 thoregon kernel: [66571.610987]
[<ffffffff8110183d>] do_filp_open+0x3d/0xa0
Nov 17 14:07:39 thoregon kernel: [66571.610989]
[<ffffffff810f2129>] do_sys_open+0xf9/0x1e0
Nov 17 14:07:39 thoregon kernel: [66571.610991]
[<ffffffff810f222c>] sys_open+0x1c/0x20
Nov 17 14:07:39 thoregon kernel: [66571.610993]
[<ffffffff816b2e52>] system_call_fastpath+0x16/0x1b
Nov 17 14:07:39 thoregon kernel: [66571.610995]     RECLAIM_FS-ON-W at:
Nov 17 14:07:39 thoregon kernel: [66571.610996]
  [<ffffffff8108137e>] mark_held_locks+0x7e/0x130
Nov 17 14:07:39 thoregon kernel: [66571.610998]
  [<ffffffff81081a63>] lockdep_trace_alloc+0x63/0xc0
Nov 17 14:07:39 thoregon kernel: [66571.610999]
  [<ffffffff810e9dc5>] kmem_cache_alloc+0x35/0xe0
Nov 17 14:07:39 thoregon kernel: [66571.611002]
  [<ffffffff810dba21>] vm_map_ram+0x271/0x770
Nov 17 14:07:39 thoregon kernel: [66571.611004]
  [<ffffffff811e12b6>] _xfs_buf_map_pages+0x46/0xe0
Nov 17 14:07:39 thoregon kernel: [66571.611008]
  [<ffffffff811e21ca>] xfs_buf_get_map+0x8a/0x130
Nov 17 14:07:39 thoregon kernel: [66571.611009]
  [<ffffffff81233989>] xfs_trans_get_buf_map+0xa9/0xd0
Nov 17 14:07:39 thoregon kernel: [66571.611011]
  [<ffffffff8121bc2d>] xfs_ialloc_inode_init+0xcd/0x1d0
Nov 17 14:07:39 thoregon kernel: [66571.611015]
  [<ffffffff8121c16f>] xfs_ialloc_ag_alloc+0x18f/0x500
Nov 17 14:07:39 thoregon kernel: [66571.611017]
  [<ffffffff8121d955>] xfs_dialloc+0x185/0x2a0
Nov 17 14:07:39 thoregon kernel: [66571.611019]
  [<ffffffff8121f068>] xfs_ialloc+0x58/0x650
Nov 17 14:07:39 thoregon kernel: [66571.611021]
  [<ffffffff811f3995>] xfs_dir_ialloc+0x65/0x270
Nov 17 14:07:39 thoregon kernel: [66571.611023]
  [<ffffffff811f537c>] xfs_create+0x3ac/0x5a0
Nov 17 14:07:39 thoregon kernel: [66571.611024]
  [<ffffffff811eca2a>] xfs_vn_mknod+0x8a/0x1b0
Nov 17 14:07:39 thoregon kernel: [66571.611026]
  [<ffffffff811ecb61>] xfs_vn_mkdir+0x11/0x20
Nov 17 14:07:39 thoregon kernel: [66571.611028]
  [<ffffffff8110016f>] vfs_mkdir+0x7f/0xd0
Nov 17 14:07:39 thoregon kernel: [66571.611030]
  [<ffffffff81101b83>] sys_mkdirat+0x43/0x80
Nov 17 14:07:39 thoregon kernel: [66571.611032]
  [<ffffffff81101bd4>] sys_mkdir+0x14/0x20
Nov 17 14:07:39 thoregon kernel: [66571.611034]
  [<ffffffff816b2e52>] system_call_fastpath+0x16/0x1b
Nov 17 14:07:39 thoregon kernel: [66571.611036]     INITIAL USE at:
Nov 17 14:07:39 thoregon kernel: [66571.611037]
[<ffffffff8107ed49>] __lock_acquire+0x2e9/0x1c00
Nov 17 14:07:39 thoregon kernel: [66571.611038]
[<ffffffff81080b55>] lock_acquire+0x55/0x70
Nov 17 14:07:39 thoregon kernel: [66571.611040]
[<ffffffff8106126a>] down_write_nested+0x4a/0x70
Nov 17 14:07:39 thoregon kernel: [66571.611042]
[<ffffffff811e8164>] xfs_ilock+0x84/0xb0
Nov 17 14:07:39 thoregon kernel: [66571.611044]
[<ffffffff811f51a4>] xfs_create+0x1d4/0x5a0
Nov 17 14:07:39 thoregon kernel: [66571.611046]
[<ffffffff811eca2a>] xfs_vn_mknod+0x8a/0x1b0
Nov 17 14:07:39 thoregon kernel: [66571.611047]
[<ffffffff811ecb7e>] xfs_vn_create+0xe/0x10
Nov 17 14:07:39 thoregon kernel: [66571.611049]
[<ffffffff81100322>] vfs_create+0x72/0xc0
Nov 17 14:07:39 thoregon kernel: [66571.611051]
[<ffffffff81100b7e>] do_last.isra.69+0x80e/0xc80
Nov 17 14:07:39 thoregon kernel: [66571.611053]
[<ffffffff8110109b>] path_openat.isra.70+0xab/0x490
Nov 17 14:07:39 thoregon kernel: [66571.611055]
[<ffffffff8110183d>] do_filp_open+0x3d/0xa0
Nov 17 14:07:39 thoregon kernel: [66571.611057]
[<ffffffff810f2129>] do_sys_open+0xf9/0x1e0
Nov 17 14:07:39 thoregon kernel: [66571.611059]
[<ffffffff810f222c>] sys_open+0x1c/0x20
Nov 17 14:07:39 thoregon kernel: [66571.611061]
[<ffffffff816b2e52>] system_call_fastpath+0x16/0x1b
Nov 17 14:07:39 thoregon kernel: [66571.611063]   }
Nov 17 14:07:39 thoregon kernel: [66571.611064]   ... key      at:
[<ffffffff825b4b81>] __key.41357+0x1/0x8
Nov 17 14:07:39 thoregon kernel: [66571.611066]   ... acquired at:
Nov 17 14:07:39 thoregon kernel: [66571.611067]
[<ffffffff81080b55>] lock_acquire+0x55/0x70
Nov 17 14:07:39 thoregon kernel: [66571.611069]
[<ffffffff8106126a>] down_write_nested+0x4a/0x70
Nov 17 14:07:39 thoregon kernel: [66571.611071]
[<ffffffff811e8164>] xfs_ilock+0x84/0xb0
Nov 17 14:07:39 thoregon kernel: [66571.611073]
[<ffffffff811f51a4>] xfs_create+0x1d4/0x5a0
Nov 17 14:07:39 thoregon kernel: [66571.611074]
[<ffffffff811eca2a>] xfs_vn_mknod+0x8a/0x1b0
Nov 17 14:07:39 thoregon kernel: [66571.611076]
[<ffffffff811ecb7e>] xfs_vn_create+0xe/0x10
Nov 17 14:07:39 thoregon kernel: [66571.611078]
[<ffffffff81100322>] vfs_create+0x72/0xc0
Nov 17 14:07:39 thoregon kernel: [66571.611080]
[<ffffffff81100b7e>] do_last.isra.69+0x80e/0xc80
Nov 17 14:07:39 thoregon kernel: [66571.611082]
[<ffffffff8110109b>] path_openat.isra.70+0xab/0x490
Nov 17 14:07:39 thoregon kernel: [66571.611084]
[<ffffffff8110183d>] do_filp_open+0x3d/0xa0
Nov 17 14:07:39 thoregon kernel: [66571.611086]
[<ffffffff810f2129>] do_sys_open+0xf9/0x1e0
Nov 17 14:07:39 thoregon kernel: [66571.611088]
[<ffffffff810f222c>] sys_open+0x1c/0x20
Nov 17 14:07:39 thoregon kernel: [66571.611090]
[<ffffffff816b2e52>] system_call_fastpath+0x16/0x1b
Nov 17 14:07:39 thoregon kernel: [66571.611091]
Nov 17 14:07:39 thoregon kernel: [66571.611092] ->
(sb_internal){.+.+.?} ops: 1341531 {
Nov 17 14:07:39 thoregon kernel: [66571.611095]    HARDIRQ-ON-R at:
Nov 17 14:07:39 thoregon kernel: [66571.611096]
[<ffffffff8107ef6a>] __lock_acquire+0x50a/0x1c00
Nov 17 14:07:39 thoregon kernel: [66571.611098]
[<ffffffff81080b55>] lock_acquire+0x55/0x70
Nov 17 14:07:39 thoregon kernel: [66571.611100]
[<ffffffff810f451b>] __sb_start_write+0xab/0x190
Nov 17 14:07:39 thoregon kernel: [66571.611102]
[<ffffffff8122b138>] xfs_trans_alloc+0x28/0x50
Nov 17 14:07:39 thoregon kernel: [66571.611104]
[<ffffffff811f5157>] xfs_create+0x187/0x5a0
Nov 17 14:07:39 thoregon kernel: [66571.611105]
[<ffffffff811eca2a>] xfs_vn_mknod+0x8a/0x1b0
Nov 17 14:07:39 thoregon kernel: [66571.611107]
[<ffffffff811ecb7e>] xfs_vn_create+0xe/0x10
Nov 17 14:07:39 thoregon kernel: [66571.611109]
[<ffffffff81100322>] vfs_create+0x72/0xc0
Nov 17 14:07:39 thoregon kernel: [66571.611111]
[<ffffffff81100b7e>] do_last.isra.69+0x80e/0xc80
Nov 17 14:07:39 thoregon kernel: [66571.611113]
[<ffffffff8110109b>] path_openat.isra.70+0xab/0x490
Nov 17 14:07:39 thoregon kernel: [66571.611115]
[<ffffffff8110183d>] do_filp_open+0x3d/0xa0
Nov 17 14:07:39 thoregon kernel: [66571.611117]
[<ffffffff810f2129>] do_sys_open+0xf9/0x1e0
Nov 17 14:07:39 thoregon kernel: [66571.611119]
[<ffffffff810f222c>] sys_open+0x1c/0x20
Nov 17 14:07:39 thoregon kernel: [66571.611121]
[<ffffffff816b2e52>] system_call_fastpath+0x16/0x1b
Nov 17 14:07:39 thoregon kernel: [66571.611123]    SOFTIRQ-ON-R at:
Nov 17 14:07:39 thoregon kernel: [66571.611124]
[<ffffffff8107f0c7>] __lock_acquire+0x667/0x1c00
Nov 17 14:07:39 thoregon kernel: [66571.611126]
[<ffffffff81080b55>] lock_acquire+0x55/0x70
Nov 17 14:07:39 thoregon kernel: [66571.611128]
[<ffffffff810f451b>] __sb_start_write+0xab/0x190
Nov 17 14:07:39 thoregon kernel: [66571.611130]
[<ffffffff8122b138>] xfs_trans_alloc+0x28/0x50
Nov 17 14:07:39 thoregon kernel: [66571.611132]
[<ffffffff811f5157>] xfs_create+0x187/0x5a0
Nov 17 14:07:39 thoregon kernel: [66571.611133]
[<ffffffff811eca2a>] xfs_vn_mknod+0x8a/0x1b0
Nov 17 14:07:39 thoregon kernel: [66571.611135]
[<ffffffff811ecb7e>] xfs_vn_create+0xe/0x10
Nov 17 14:07:39 thoregon kernel: [66571.611137]
[<ffffffff81100322>] vfs_create+0x72/0xc0
Nov 17 14:07:39 thoregon kernel: [66571.611139]
[<ffffffff81100b7e>] do_last.isra.69+0x80e/0xc80
Nov 17 14:07:39 thoregon kernel: [66571.611141]
[<ffffffff8110109b>] path_openat.isra.70+0xab/0x490
Nov 17 14:07:39 thoregon kernel: [66571.611143]
[<ffffffff8110183d>] do_filp_open+0x3d/0xa0
Nov 17 14:07:39 thoregon kernel: [66571.611145]
[<ffffffff810f2129>] do_sys_open+0xf9/0x1e0
Nov 17 14:07:39 thoregon kernel: [66571.611147]
[<ffffffff810f222c>] sys_open+0x1c/0x20
Nov 17 14:07:39 thoregon kernel: [66571.611149]
[<ffffffff816b2e52>] system_call_fastpath+0x16/0x1b
Nov 17 14:07:39 thoregon kernel: [66571.611151]    IN-RECLAIM_FS-R at:
Nov 17 14:07:39 thoregon kernel: [66571.611152]
[<ffffffff8107efdf>] __lock_acquire+0x57f/0x1c00
Nov 17 14:07:39 thoregon kernel: [66571.611154]
[<ffffffff81080b55>] lock_acquire+0x55/0x70
Nov 17 14:07:39 thoregon kernel: [66571.611156]
[<ffffffff810f451b>] __sb_start_write+0xab/0x190
Nov 17 14:07:39 thoregon kernel: [66571.611158]
[<ffffffff8122b138>] xfs_trans_alloc+0x28/0x50
Nov 17 14:07:39 thoregon kernel: [66571.611159]
[<ffffffff811f3e64>] xfs_free_eofblocks+0x104/0x250
Nov 17 14:07:39 thoregon kernel: [66571.611161]
[<ffffffff811f4b39>] xfs_inactive+0xa9/0x480
Nov 17 14:07:39 thoregon kernel: [66571.611163]
[<ffffffff811efe10>] xfs_fs_evict_inode+0x70/0x80
Nov 17 14:07:39 thoregon kernel: [66571.611165]
[<ffffffff8110cb7f>] evict+0xaf/0x1b0
Nov 17 14:07:39 thoregon kernel: [66571.611168]
[<ffffffff8110d209>] dispose_list+0x39/0x50
Nov 17 14:07:39 thoregon kernel: [66571.611170]
[<ffffffff8110dc63>] prune_icache_sb+0x183/0x340
Nov 17 14:07:39 thoregon kernel: [66571.611172]
[<ffffffff810f5bc3>] prune_super+0xf3/0x1a0
Nov 17 14:07:39 thoregon kernel: [66571.611173]
[<ffffffff810bbdee>] shrink_slab+0x11e/0x1f0
Nov 17 14:07:39 thoregon kernel: [66571.611175]
[<ffffffff810be98f>] try_to_free_pages+0x21f/0x4e0
Nov 17 14:07:39 thoregon kernel: [66571.611177]
[<ffffffff810b5ec6>] __alloc_pages_nodemask+0x506/0x800
Nov 17 14:07:39 thoregon kernel: [66571.611179]
[<ffffffff810ce56e>] handle_pte_fault+0x5ae/0x7a0
Nov 17 14:07:39 thoregon kernel: [66571.611182]
[<ffffffff810cf769>] handle_mm_fault+0x1f9/0x2a0
Nov 17 14:07:39 thoregon kernel: [66571.611184]
[<ffffffff81029dfc>] __do_page_fault+0x16c/0x480
Nov 17 14:07:39 thoregon kernel: [66571.611186]
[<ffffffff8102a139>] do_page_fault+0x9/0x10
Nov 17 14:07:39 thoregon kernel: [66571.611188]
[<ffffffff816b287f>] page_fault+0x1f/0x30
Nov 17 14:07:39 thoregon kernel: [66571.611190]    RECLAIM_FS-ON-R at:
Nov 17 14:07:39 thoregon kernel: [66571.611191]
[<ffffffff8108137e>] mark_held_locks+0x7e/0x130
Nov 17 14:07:39 thoregon kernel: [66571.611193]
[<ffffffff81081a63>] lockdep_trace_alloc+0x63/0xc0
Nov 17 14:07:39 thoregon kernel: [66571.611195]
[<ffffffff810e9dc5>] kmem_cache_alloc+0x35/0xe0
Nov 17 14:07:39 thoregon kernel: [66571.611197]
[<ffffffff811f6d4f>] kmem_zone_alloc+0x5f/0xe0
Nov 17 14:07:39 thoregon kernel: [66571.611198]
[<ffffffff811f6de8>] kmem_zone_zalloc+0x18/0x50
Nov 17 14:07:39 thoregon kernel: [66571.611200]
[<ffffffff8122b0b2>] _xfs_trans_alloc+0x32/0x90
Nov 17 14:07:39 thoregon kernel: [66571.611202]
[<ffffffff8122b148>] xfs_trans_alloc+0x38/0x50
Nov 17 14:07:39 thoregon kernel: [66571.611204]
[<ffffffff811f5157>] xfs_create+0x187/0x5a0
Nov 17 14:07:39 thoregon kernel: [66571.611205]
[<ffffffff811eca2a>] xfs_vn_mknod+0x8a/0x1b0
Nov 17 14:07:39 thoregon kernel: [66571.611207]
[<ffffffff811ecb7e>] xfs_vn_create+0xe/0x10
Nov 17 14:07:39 thoregon kernel: [66571.611209]
[<ffffffff81100322>] vfs_create+0x72/0xc0
Nov 17 14:07:39 thoregon kernel: [66571.611211]
[<ffffffff81100b7e>] do_last.isra.69+0x80e/0xc80
Nov 17 14:07:39 thoregon kernel: [66571.611213]
[<ffffffff8110109b>] path_openat.isra.70+0xab/0x490
Nov 17 14:07:39 thoregon kernel: [66571.611215]
[<ffffffff8110183d>] do_filp_open+0x3d/0xa0
Nov 17 14:07:39 thoregon kernel: [66571.611217]
[<ffffffff810f2129>] do_sys_open+0xf9/0x1e0
Nov 17 14:07:39 thoregon kernel: [66571.611219]
[<ffffffff810f222c>] sys_open+0x1c/0x20
Nov 17 14:07:39 thoregon kernel: [66571.611221]
[<ffffffff816b2e52>] system_call_fastpath+0x16/0x1b
Nov 17 14:07:39 thoregon kernel: [66571.611223]    INITIAL USE at:
Nov 17 14:07:39 thoregon kernel: [66571.611224]
[<ffffffff8107ed49>] __lock_acquire+0x2e9/0x1c00
Nov 17 14:07:39 thoregon kernel: [66571.611225]
[<ffffffff81080b55>] lock_acquire+0x55/0x70
Nov 17 14:07:39 thoregon kernel: [66571.611227]
[<ffffffff810f451b>] __sb_start_write+0xab/0x190
Nov 17 14:07:39 thoregon kernel: [66571.611229]
[<ffffffff8122b138>] xfs_trans_alloc+0x28/0x50
Nov 17 14:07:39 thoregon kernel: [66571.611231]
[<ffffffff811f5157>] xfs_create+0x187/0x5a0
Nov 17 14:07:39 thoregon kernel: [66571.611232]
[<ffffffff811eca2a>] xfs_vn_mknod+0x8a/0x1b0
Nov 17 14:07:39 thoregon kernel: [66571.611234]
[<ffffffff811ecb7e>] xfs_vn_create+0xe/0x10
Nov 17 14:07:39 thoregon kernel: [66571.611236]
[<ffffffff81100322>] vfs_create+0x72/0xc0
Nov 17 14:07:39 thoregon kernel: [66571.611238]
[<ffffffff81100b7e>] do_last.isra.69+0x80e/0xc80
Nov 17 14:07:39 thoregon kernel: [66571.611240]
[<ffffffff8110109b>] path_openat.isra.70+0xab/0x490
Nov 17 14:07:39 thoregon kernel: [66571.611242]
[<ffffffff8110183d>] do_filp_open+0x3d/0xa0
Nov 17 14:07:39 thoregon kernel: [66571.611244]
[<ffffffff810f2129>] do_sys_open+0xf9/0x1e0
Nov 17 14:07:39 thoregon kernel: [66571.611246]
[<ffffffff810f222c>] sys_open+0x1c/0x20
Nov 17 14:07:39 thoregon kernel: [66571.611248]
[<ffffffff816b2e52>] system_call_fastpath+0x16/0x1b
Nov 17 14:07:39 thoregon kernel: [66571.611250]  }
Nov 17 14:07:39 thoregon kernel: [66571.611251]  ... key      at:
[<ffffffff81c34e40>] xfs_fs_type+0x60/0x80
Nov 17 14:07:39 thoregon kernel: [66571.611254]  ... acquired at:
Nov 17 14:07:39 thoregon kernel: [66571.611254]
[<ffffffff8107df3b>] check_usage_forwards+0x10b/0x140
Nov 17 14:07:39 thoregon kernel: [66571.611256]
[<ffffffff8107e900>] mark_lock+0x190/0x2f0
Nov 17 14:07:39 thoregon kernel: [66571.611258]
[<ffffffff8107efdf>] __lock_acquire+0x57f/0x1c00
Nov 17 14:07:39 thoregon kernel: [66571.611260]
[<ffffffff81080b55>] lock_acquire+0x55/0x70
Nov 17 14:07:39 thoregon kernel: [66571.611261]
[<ffffffff810f451b>] __sb_start_write+0xab/0x190
Nov 17 14:07:39 thoregon kernel: [66571.611263]
[<ffffffff8122b138>] xfs_trans_alloc+0x28/0x50
Nov 17 14:07:39 thoregon kernel: [66571.611265]
[<ffffffff811f3e64>] xfs_free_eofblocks+0x104/0x250
Nov 17 14:07:39 thoregon kernel: [66571.611266]
[<ffffffff811f4b39>] xfs_inactive+0xa9/0x480
Nov 17 14:07:39 thoregon kernel: [66571.611268]
[<ffffffff811efe10>] xfs_fs_evict_inode+0x70/0x80
Nov 17 14:07:39 thoregon kernel: [66571.611270]
[<ffffffff8110cb7f>] evict+0xaf/0x1b0
Nov 17 14:07:39 thoregon kernel: [66571.611271]
[<ffffffff8110d209>] dispose_list+0x39/0x50
Nov 17 14:07:39 thoregon kernel: [66571.611273]
[<ffffffff8110dc63>] prune_icache_sb+0x183/0x340
Nov 17 14:07:39 thoregon kernel: [66571.611275]
[<ffffffff810f5bc3>] prune_super+0xf3/0x1a0
Nov 17 14:07:39 thoregon kernel: [66571.611277]
[<ffffffff810bbdee>] shrink_slab+0x11e/0x1f0
Nov 17 14:07:39 thoregon kernel: [66571.611278]
[<ffffffff810be98f>] try_to_free_pages+0x21f/0x4e0
Nov 17 14:07:39 thoregon kernel: [66571.611280]
[<ffffffff810b5ec6>] __alloc_pages_nodemask+0x506/0x800
Nov 17 14:07:39 thoregon kernel: [66571.611282]
[<ffffffff810ce56e>] handle_pte_fault+0x5ae/0x7a0
Nov 17 14:07:39 thoregon kernel: [66571.611284]
[<ffffffff810cf769>] handle_mm_fault+0x1f9/0x2a0
Nov 17 14:07:39 thoregon kernel: [66571.611286]
[<ffffffff81029dfc>] __do_page_fault+0x16c/0x480
Nov 17 14:07:39 thoregon kernel: [66571.611287]
[<ffffffff8102a139>] do_page_fault+0x9/0x10
Nov 17 14:07:39 thoregon kernel: [66571.611289]
[<ffffffff816b287f>] page_fault+0x1f/0x30
Nov 17 14:07:39 thoregon kernel: [66571.611291]
Nov 17 14:07:39 thoregon kernel: [66571.611292]
Nov 17 14:07:39 thoregon kernel: [66571.611292] stack backtrace:
Nov 17 14:07:39 thoregon kernel: [66571.611294] Pid: 21330, comm: cc1
Not tainted 3.7.0-rc5 #1
Nov 17 14:07:39 thoregon kernel: [66571.611295] Call Trace:
Nov 17 14:07:39 thoregon kernel: [66571.611298]  [<ffffffff8107de28>]
print_irq_inversion_bug.part.37+0x1e8/0x1f0
Nov 17 14:07:39 thoregon kernel: [66571.611300]  [<ffffffff8107df3b>]
check_usage_forwards+0x10b/0x140
Nov 17 14:07:39 thoregon kernel: [66571.611303]  [<ffffffff8107e900>]
mark_lock+0x190/0x2f0
Nov 17 14:07:39 thoregon kernel: [66571.611306]  [<ffffffff8150406e>]
? dm_request+0x2e/0x2a0
Nov 17 14:07:39 thoregon kernel: [66571.611308]  [<ffffffff8107de30>]
? print_irq_inversion_bug.part.37+0x1f0/0x1f0
Nov 17 14:07:39 thoregon kernel: [66571.611310]  [<ffffffff8107efdf>]
__lock_acquire+0x57f/0x1c00
Nov 17 14:07:39 thoregon kernel: [66571.611313]  [<ffffffff812202f4>]
? xfs_iext_bno_to_ext+0x84/0x160
Nov 17 14:07:39 thoregon kernel: [66571.611316]  [<ffffffff8120a023>]
? xfs_bmbt_get_all+0x13/0x20
Nov 17 14:07:39 thoregon kernel: [66571.611318]  [<ffffffff81200fb4>]
? xfs_bmap_search_multi_extents+0xa4/0x110
Nov 17 14:07:39 thoregon kernel: [66571.611320]  [<ffffffff81080b55>]
lock_acquire+0x55/0x70
Nov 17 14:07:39 thoregon kernel: [66571.611322]  [<ffffffff8122b138>]
? xfs_trans_alloc+0x28/0x50
Nov 17 14:07:39 thoregon kernel: [66571.611324]  [<ffffffff810f451b>]
__sb_start_write+0xab/0x190
Nov 17 14:07:39 thoregon kernel: [66571.611326]  [<ffffffff8122b138>]
? xfs_trans_alloc+0x28/0x50
Nov 17 14:07:39 thoregon kernel: [66571.611328]  [<ffffffff8122b138>]
? xfs_trans_alloc+0x28/0x50
Nov 17 14:07:39 thoregon kernel: [66571.611330]  [<ffffffff8122b138>]
xfs_trans_alloc+0x28/0x50
Nov 17 14:07:39 thoregon kernel: [66571.611332]  [<ffffffff811f3e64>]
xfs_free_eofblocks+0x104/0x250
Nov 17 14:07:39 thoregon kernel: [66571.611334]  [<ffffffff816b204b>]
? _raw_spin_unlock_irq+0x2b/0x50
Nov 17 14:07:39 thoregon kernel: [66571.611336]  [<ffffffff811f4b39>]
xfs_inactive+0xa9/0x480
Nov 17 14:07:39 thoregon kernel: [66571.611337]  [<ffffffff816b204b>]
? _raw_spin_unlock_irq+0x2b/0x50
Nov 17 14:07:39 thoregon kernel: [66571.611340]  [<ffffffff811efe10>]
xfs_fs_evict_inode+0x70/0x80
Nov 17 14:07:39 thoregon kernel: [66571.611342]  [<ffffffff8110cb7f>]
evict+0xaf/0x1b0
Nov 17 14:07:39 thoregon kernel: [66571.611344]  [<ffffffff8110d209>]
dispose_list+0x39/0x50
Nov 17 14:07:39 thoregon kernel: [66571.611346]  [<ffffffff8110dc63>]
prune_icache_sb+0x183/0x340
Nov 17 14:07:39 thoregon kernel: [66571.611347]  [<ffffffff810f5bc3>]
prune_super+0xf3/0x1a0
Nov 17 14:07:39 thoregon kernel: [66571.611349]  [<ffffffff810bbdee>]
shrink_slab+0x11e/0x1f0
Nov 17 14:07:39 thoregon kernel: [66571.611352]  [<ffffffff810be98f>]
try_to_free_pages+0x21f/0x4e0
Nov 17 14:07:39 thoregon kernel: [66571.611354]  [<ffffffff810b5ec6>]
__alloc_pages_nodemask+0x506/0x800
Nov 17 14:07:39 thoregon kernel: [66571.611356]  [<ffffffff810b9e40>]
? lru_deactivate_fn+0x1c0/0x1c0
Nov 17 14:07:39 thoregon kernel: [66571.611358]  [<ffffffff810ce56e>]
handle_pte_fault+0x5ae/0x7a0
Nov 17 14:07:39 thoregon kernel: [66571.611360]  [<ffffffff810cf769>]
handle_mm_fault+0x1f9/0x2a0
Nov 17 14:07:39 thoregon kernel: [66571.611363]  [<ffffffff81029dfc>]
__do_page_fault+0x16c/0x480
Nov 17 14:07:39 thoregon kernel: [66571.611366]  [<ffffffff8129c7ad>]
? trace_hardirqs_off_thunk+0x3a/0x3c
Nov 17 14:07:39 thoregon kernel: [66571.611368]  [<ffffffff8102a139>]
do_page_fault+0x9/0x10
Nov 17 14:07:39 thoregon kernel: [66571.611370]  [<ffffffff816b287f>]
page_fault+0x1f/0x30

^ permalink raw reply	[flat|nested] 31+ messages in thread

system_call_fastpath+0x16/0x1b
Nov  6 21:57:09 thoregon kernel: [ 9941.104417] irq event stamp: 18505
Nov  6 21:57:09 thoregon kernel: [ 9941.104418] hardirqs last  enabled
at (18505): [<ffffffff816aec5d>] mutex_trylock+0xfd/0x170
Nov  6 21:57:09 thoregon kernel: [ 9941.104421] hardirqs last disabled
at (18504): [<ffffffff816aeb9e>] mutex_trylock+0x3e/0x170
Nov  6 21:57:09 thoregon kernel: [ 9941.104423] softirqs last  enabled
at (18492): [<ffffffff81042fb1>] __do_softirq+0x111/0x170
Nov  6 21:57:09 thoregon kernel: [ 9941.104426] softirqs last disabled
at (18477): [<ffffffff816b3e3c>] call_softirq+0x1c/0x30
Nov  6 21:57:09 thoregon kernel: [ 9941.104428]
Nov  6 21:57:09 thoregon kernel: [ 9941.104428] other info that might
help us debug this:
Nov  6 21:57:09 thoregon kernel: [ 9941.104429]  Possible unsafe
locking scenario:
Nov  6 21:57:09 thoregon kernel: [ 9941.104429]
Nov  6 21:57:09 thoregon kernel: [ 9941.104430]        CPU0
Nov  6 21:57:09 thoregon kernel: [ 9941.104431]        ----
Nov  6 21:57:09 thoregon kernel: [ 9941.104432]   lock(&(&ip->i_lock)->mr_lock);
Nov  6 21:57:09 thoregon kernel: [ 9941.104433]   <Interrupt>
Nov  6 21:57:09 thoregon kernel: [ 9941.104434]
lock(&(&ip->i_lock)->mr_lock);
Nov  6 21:57:09 thoregon kernel: [ 9941.104435]
Nov  6 21:57:09 thoregon kernel: [ 9941.104435]  *** DEADLOCK ***
Nov  6 21:57:09 thoregon kernel: [ 9941.104435]
Nov  6 21:57:09 thoregon kernel: [ 9941.104437] 3 locks held by kswapd0/725:
Nov  6 21:57:09 thoregon kernel: [ 9941.104438]  #0:
(shrinker_rwsem){++++..}, at: [<ffffffff810bbd22>]
shrink_slab+0x32/0x1f0
Nov  6 21:57:09 thoregon kernel: [ 9941.104442]  #1:
(&type->s_umount_key#20){++++.+}, at: [<ffffffff810f5a8e>]
grab_super_passive+0x3e/0x90
Nov  6 21:57:09 thoregon kernel: [ 9941.104446]  #2:
(&pag->pag_ici_reclaim_lock){+.+...}, at: [<ffffffff811f28ac>]
xfs_reclaim_inodes_ag+0xbc/0x4f0
Nov  6 21:57:09 thoregon kernel: [ 9941.104449]
Nov  6 21:57:09 thoregon kernel: [ 9941.104449] stack backtrace:
Nov  6 21:57:09 thoregon kernel: [ 9941.104451] Pid: 725, comm:
kswapd0 Not tainted 3.7.0-rc4 #1
Nov  6 21:57:09 thoregon kernel: [ 9941.104452] Call Trace:
Nov  6 21:57:09 thoregon kernel: [ 9941.104456]  [<ffffffff816a829c>]
print_usage_bug+0x1f5/0x206
Nov  6 21:57:09 thoregon kernel: [ 9941.104460]  [<ffffffff8100ed8a>]
? save_stack_trace+0x2a/0x50
Nov  6 21:57:09 thoregon kernel: [ 9941.104462]  [<ffffffff8107e9fd>]
mark_lock+0x28d/0x2f0
Nov  6 21:57:09 thoregon kernel: [ 9941.104464]  [<ffffffff8107de30>]
? print_irq_inversion_bug.part.37+0x1f0/0x1f0
Nov  6 21:57:09 thoregon kernel: [ 9941.104466]  [<ffffffff8107efdf>]
__lock_acquire+0x57f/0x1c00
Nov  6 21:57:09 thoregon kernel: [ 9941.104468]  [<ffffffff8107c899>]
? __lock_is_held+0x59/0x70
Nov  6 21:57:09 thoregon kernel: [ 9941.104470]  [<ffffffff81080b55>]
lock_acquire+0x55/0x70
Nov  6 21:57:09 thoregon kernel: [ 9941.104472]  [<ffffffff811e8164>]
? xfs_ilock+0x84/0xb0
Nov  6 21:57:09 thoregon kernel: [ 9941.104476]  [<ffffffff8106126a>]
down_write_nested+0x4a/0x70
Nov  6 21:57:09 thoregon kernel: [ 9941.104477]  [<ffffffff811e8164>]
? xfs_ilock+0x84/0xb0
Nov  6 21:57:09 thoregon kernel: [ 9941.104479]  [<ffffffff811e8164>]
xfs_ilock+0x84/0xb0
Nov  6 21:57:09 thoregon kernel: [ 9941.104481]  [<ffffffff811f1ce6>]
xfs_reclaim_inode+0x136/0x340
Nov  6 21:57:09 thoregon kernel: [ 9941.104483]  [<ffffffff811f2aaf>]
xfs_reclaim_inodes_ag+0x2bf/0x4f0
Nov  6 21:57:09 thoregon kernel: [ 9941.104485]  [<ffffffff811f28d0>]
? xfs_reclaim_inodes_ag+0xe0/0x4f0
Nov  6 21:57:09 thoregon kernel: [ 9941.104487]  [<ffffffff811f2e1e>]
xfs_reclaim_inodes_nr+0x2e/0x40
Nov  6 21:57:09 thoregon kernel: [ 9941.104489]  [<ffffffff811ef6f0>]
xfs_fs_free_cached_objects+0x10/0x20
Nov  6 21:57:09 thoregon kernel: [ 9941.104491]  [<ffffffff810f5bf3>]
prune_super+0x113/0x1a0
Nov  6 21:57:09 thoregon kernel: [ 9941.104493]  [<ffffffff810bbe0e>]
shrink_slab+0x11e/0x1f0
Nov  6 21:57:09 thoregon kernel: [ 9941.104496]  [<ffffffff810be400>]
kswapd+0x690/0xa10
Nov  6 21:57:09 thoregon kernel: [ 9941.104498]  [<ffffffff8105ca30>]
? __init_waitqueue_head+0x60/0x60
Nov  6 21:57:09 thoregon kernel: [ 9941.104500]  [<ffffffff810bdd70>]
? shrink_lruvec+0x540/0x540
Nov  6 21:57:09 thoregon kernel: [ 9941.104502]  [<ffffffff8105c246>]
kthread+0xd6/0xe0
Nov  6 21:57:09 thoregon kernel: [ 9941.104504]  [<ffffffff816b1efb>]
? _raw_spin_unlock_irq+0x2b/0x50
Nov  6 21:57:09 thoregon kernel: [ 9941.104506]  [<ffffffff8105c170>]
? flush_kthread_worker+0xe0/0xe0
Nov  6 21:57:09 thoregon kernel: [ 9941.104508]  [<ffffffff816b2c6c>]
ret_from_fork+0x7c/0xb0
Nov  6 21:57:09 thoregon kernel: [ 9941.104510]  [<ffffffff8105c170>]
? flush_kthread_worker+0xe0/0xe0

Nov 17 14:07:38 thoregon kernel: [66571.610863]
Nov 17 14:07:38 thoregon kernel: [66571.610869]
=========================================================
Nov 17 14:07:38 thoregon kernel: [66571.610870] [ INFO: possible irq
lock inversion dependency detected ]
Nov 17 14:07:38 thoregon kernel: [66571.610873] 3.7.0-rc5 #1 Not tainted
Nov 17 14:07:38 thoregon kernel: [66571.610874]
---------------------------------------------------------
Nov 17 14:07:38 thoregon kernel: [66571.610875] cc1/21330 just changed
the state of lock:
Nov 17 14:07:38 thoregon kernel: [66571.610877]
(sb_internal){.+.+.?}, at: [<ffffffff8122b138>]
xfs_trans_alloc+0x28/0x50
Nov 17 14:07:38 thoregon kernel: [66571.610885] but this lock took
another, RECLAIM_FS-unsafe lock in the past:
Nov 17 14:07:38 thoregon kernel: [66571.610886]
(&(&ip->i_lock)->mr_lock/1){+.+.+.}
Nov 17 14:07:38 thoregon kernel: [66571.610886]
Nov 17 14:07:39 thoregon kernel: [66571.610886] and interrupts could
create inverse lock ordering between them.
Nov 17 14:07:39 thoregon kernel: [66571.610886]
Nov 17 14:07:39 thoregon kernel: [66571.610890]
Nov 17 14:07:39 thoregon kernel: [66571.610890] other info that might
help us debug this:
Nov 17 14:07:39 thoregon kernel: [66571.610891]  Possible interrupt
unsafe locking scenario:
Nov 17 14:07:39 thoregon kernel: [66571.610891]
Nov 17 14:07:39 thoregon kernel: [66571.610892]        CPU0                    CPU1
Nov 17 14:07:39 thoregon kernel: [66571.610893]        ----                    ----
Nov 17 14:07:39 thoregon kernel: [66571.610894]   lock(&(&ip->i_lock)->mr_lock/1);
Nov 17 14:07:39 thoregon kernel: [66571.610896]                                local_irq_disable();
Nov 17 14:07:39 thoregon kernel: [66571.610897]                                lock(sb_internal);
Nov 17 14:07:39 thoregon kernel: [66571.610898]                                lock(&(&ip->i_lock)->mr_lock/1);
Nov 17 14:07:39 thoregon kernel: [66571.610900]   <Interrupt>
Nov 17 14:07:39 thoregon kernel: [66571.610901]     lock(sb_internal);
Nov 17 14:07:39 thoregon kernel: [66571.610902]
Nov 17 14:07:39 thoregon kernel: [66571.610902]  *** DEADLOCK ***
Nov 17 14:07:39 thoregon kernel: [66571.610902]
Nov 17 14:07:39 thoregon kernel: [66571.610904] 3 locks held by cc1/21330:
Nov 17 14:07:39 thoregon kernel: [66571.610905]  #0:
(&mm->mmap_sem){++++++}, at: [<ffffffff81029d8b>]
__do_page_fault+0xfb/0x480
Nov 17 14:07:39 thoregon kernel: [66571.610910]  #1:
(shrinker_rwsem){++++..}, at: [<ffffffff810bbd02>]
shrink_slab+0x32/0x1f0
Nov 17 14:07:39 thoregon kernel: [66571.610915]  #2:
(&type->s_umount_key#20){++++.+}, at: [<ffffffff810f5a7e>]
grab_super_passive+0x3e/0x90
Nov 17 14:07:39 thoregon kernel: [66571.610921]
Nov 17 14:07:39 thoregon kernel: [66571.610921] the shortest
dependencies between 2nd lock and 1st lock:
Nov 17 14:07:39 thoregon kernel: [66571.610927]  ->
(&(&ip->i_lock)->mr_lock/1){+.+.+.} ops: 169649 {
Nov 17 14:07:39 thoregon kernel: [66571.610931]     HARDIRQ-ON-W at:
Nov 17 14:07:39 thoregon kernel: [66571.610932]
[<ffffffff8107f091>] __lock_acquire+0x631/0x1c00
Nov 17 14:07:39 thoregon kernel: [66571.610935]
[<ffffffff81080b55>] lock_acquire+0x55/0x70
Nov 17 14:07:39 thoregon kernel: [66571.610937]
[<ffffffff8106126a>] down_write_nested+0x4a/0x70
Nov 17 14:07:39 thoregon kernel: [66571.610941]
[<ffffffff811e8164>] xfs_ilock+0x84/0xb0
Nov 17 14:07:39 thoregon kernel: [66571.610944]
[<ffffffff811f51a4>] xfs_create+0x1d4/0x5a0
Nov 17 14:07:39 thoregon kernel: [66571.610946]
[<ffffffff811eca2a>] xfs_vn_mknod+0x8a/0x1b0
Nov 17 14:07:39 thoregon kernel: [66571.610948]
[<ffffffff811ecb7e>] xfs_vn_create+0xe/0x10
Nov 17 14:07:39 thoregon kernel: [66571.610950]
[<ffffffff81100322>] vfs_create+0x72/0xc0
Nov 17 14:07:39 thoregon kernel: [66571.610953]
[<ffffffff81100b7e>] do_last.isra.69+0x80e/0xc80
Nov 17 14:07:39 thoregon kernel: [66571.610955]
[<ffffffff8110109b>] path_openat.isra.70+0xab/0x490
Nov 17 14:07:39 thoregon kernel: [66571.610957]
[<ffffffff8110183d>] do_filp_open+0x3d/0xa0
Nov 17 14:07:39 thoregon kernel: [66571.610959]
[<ffffffff810f2129>] do_sys_open+0xf9/0x1e0
Nov 17 14:07:39 thoregon kernel: [66571.610962]
[<ffffffff810f222c>] sys_open+0x1c/0x20
Nov 17 14:07:39 thoregon kernel: [66571.610964]
[<ffffffff816b2e52>] system_call_fastpath+0x16/0x1b
Nov 17 14:07:39 thoregon kernel: [66571.610967]     SOFTIRQ-ON-W at:
Nov 17 14:07:39 thoregon kernel: [66571.610968]
[<ffffffff8107f0c7>] __lock_acquire+0x667/0x1c00
Nov 17 14:07:39 thoregon kernel: [66571.610970]
[<ffffffff81080b55>] lock_acquire+0x55/0x70
Nov 17 14:07:39 thoregon kernel: [66571.610972]
[<ffffffff8106126a>] down_write_nested+0x4a/0x70
Nov 17 14:07:39 thoregon kernel: [66571.610974]
[<ffffffff811e8164>] xfs_ilock+0x84/0xb0
Nov 17 14:07:39 thoregon kernel: [66571.610976]
[<ffffffff811f51a4>] xfs_create+0x1d4/0x5a0
Nov 17 14:07:39 thoregon kernel: [66571.610977]
[<ffffffff811eca2a>] xfs_vn_mknod+0x8a/0x1b0
Nov 17 14:07:39 thoregon kernel: [66571.610979]
[<ffffffff811ecb7e>] xfs_vn_create+0xe/0x10
Nov 17 14:07:39 thoregon kernel: [66571.610981]
[<ffffffff81100322>] vfs_create+0x72/0xc0
Nov 17 14:07:39 thoregon kernel: [66571.610983]
[<ffffffff81100b7e>] do_last.isra.69+0x80e/0xc80
Nov 17 14:07:39 thoregon kernel: [66571.610985]
[<ffffffff8110109b>] path_openat.isra.70+0xab/0x490
Nov 17 14:07:39 thoregon kernel: [66571.610987]
[<ffffffff8110183d>] do_filp_open+0x3d/0xa0
Nov 17 14:07:39 thoregon kernel: [66571.610989]
[<ffffffff810f2129>] do_sys_open+0xf9/0x1e0
Nov 17 14:07:39 thoregon kernel: [66571.610991]
[<ffffffff810f222c>] sys_open+0x1c/0x20
Nov 17 14:07:39 thoregon kernel: [66571.610993]
[<ffffffff816b2e52>] system_call_fastpath+0x16/0x1b
Nov 17 14:07:39 thoregon kernel: [66571.610995]     RECLAIM_FS-ON-W at:
Nov 17 14:07:39 thoregon kernel: [66571.610996]
  [<ffffffff8108137e>] mark_held_locks+0x7e/0x130
Nov 17 14:07:39 thoregon kernel: [66571.610998]
  [<ffffffff81081a63>] lockdep_trace_alloc+0x63/0xc0
Nov 17 14:07:39 thoregon kernel: [66571.610999]
  [<ffffffff810e9dc5>] kmem_cache_alloc+0x35/0xe0
Nov 17 14:07:39 thoregon kernel: [66571.611002]
  [<ffffffff810dba21>] vm_map_ram+0x271/0x770
Nov 17 14:07:39 thoregon kernel: [66571.611004]
  [<ffffffff811e12b6>] _xfs_buf_map_pages+0x46/0xe0
Nov 17 14:07:39 thoregon kernel: [66571.611008]
  [<ffffffff811e21ca>] xfs_buf_get_map+0x8a/0x130
Nov 17 14:07:39 thoregon kernel: [66571.611009]
  [<ffffffff81233989>] xfs_trans_get_buf_map+0xa9/0xd0
Nov 17 14:07:39 thoregon kernel: [66571.611011]
  [<ffffffff8121bc2d>] xfs_ialloc_inode_init+0xcd/0x1d0
Nov 17 14:07:39 thoregon kernel: [66571.611015]
  [<ffffffff8121c16f>] xfs_ialloc_ag_alloc+0x18f/0x500
Nov 17 14:07:39 thoregon kernel: [66571.611017]
  [<ffffffff8121d955>] xfs_dialloc+0x185/0x2a0
Nov 17 14:07:39 thoregon kernel: [66571.611019]
  [<ffffffff8121f068>] xfs_ialloc+0x58/0x650
Nov 17 14:07:39 thoregon kernel: [66571.611021]
  [<ffffffff811f3995>] xfs_dir_ialloc+0x65/0x270
Nov 17 14:07:39 thoregon kernel: [66571.611023]
  [<ffffffff811f537c>] xfs_create+0x3ac/0x5a0
Nov 17 14:07:39 thoregon kernel: [66571.611024]
  [<ffffffff811eca2a>] xfs_vn_mknod+0x8a/0x1b0
Nov 17 14:07:39 thoregon kernel: [66571.611026]
  [<ffffffff811ecb61>] xfs_vn_mkdir+0x11/0x20
Nov 17 14:07:39 thoregon kernel: [66571.611028]
  [<ffffffff8110016f>] vfs_mkdir+0x7f/0xd0
Nov 17 14:07:39 thoregon kernel: [66571.611030]
  [<ffffffff81101b83>] sys_mkdirat+0x43/0x80
Nov 17 14:07:39 thoregon kernel: [66571.611032]
  [<ffffffff81101bd4>] sys_mkdir+0x14/0x20
Nov 17 14:07:39 thoregon kernel: [66571.611034]
  [<ffffffff816b2e52>] system_call_fastpath+0x16/0x1b
Nov 17 14:07:39 thoregon kernel: [66571.611036]     INITIAL USE at:
Nov 17 14:07:39 thoregon kernel: [66571.611037]
[<ffffffff8107ed49>] __lock_acquire+0x2e9/0x1c00
Nov 17 14:07:39 thoregon kernel: [66571.611038]
[<ffffffff81080b55>] lock_acquire+0x55/0x70
Nov 17 14:07:39 thoregon kernel: [66571.611040]
[<ffffffff8106126a>] down_write_nested+0x4a/0x70
Nov 17 14:07:39 thoregon kernel: [66571.611042]
[<ffffffff811e8164>] xfs_ilock+0x84/0xb0
Nov 17 14:07:39 thoregon kernel: [66571.611044]
[<ffffffff811f51a4>] xfs_create+0x1d4/0x5a0
Nov 17 14:07:39 thoregon kernel: [66571.611046]
[<ffffffff811eca2a>] xfs_vn_mknod+0x8a/0x1b0
Nov 17 14:07:39 thoregon kernel: [66571.611047]
[<ffffffff811ecb7e>] xfs_vn_create+0xe/0x10
Nov 17 14:07:39 thoregon kernel: [66571.611049]
[<ffffffff81100322>] vfs_create+0x72/0xc0
Nov 17 14:07:39 thoregon kernel: [66571.611051]
[<ffffffff81100b7e>] do_last.isra.69+0x80e/0xc80
Nov 17 14:07:39 thoregon kernel: [66571.611053]
[<ffffffff8110109b>] path_openat.isra.70+0xab/0x490
Nov 17 14:07:39 thoregon kernel: [66571.611055]
[<ffffffff8110183d>] do_filp_open+0x3d/0xa0
Nov 17 14:07:39 thoregon kernel: [66571.611057]
[<ffffffff810f2129>] do_sys_open+0xf9/0x1e0
Nov 17 14:07:39 thoregon kernel: [66571.611059]
[<ffffffff810f222c>] sys_open+0x1c/0x20
Nov 17 14:07:39 thoregon kernel: [66571.611061]
[<ffffffff816b2e52>] system_call_fastpath+0x16/0x1b
Nov 17 14:07:39 thoregon kernel: [66571.611063]   }
Nov 17 14:07:39 thoregon kernel: [66571.611064]   ... key      at:
[<ffffffff825b4b81>] __key.41357+0x1/0x8
Nov 17 14:07:39 thoregon kernel: [66571.611066]   ... acquired at:
Nov 17 14:07:39 thoregon kernel: [66571.611067]
[<ffffffff81080b55>] lock_acquire+0x55/0x70
Nov 17 14:07:39 thoregon kernel: [66571.611069]
[<ffffffff8106126a>] down_write_nested+0x4a/0x70
Nov 17 14:07:39 thoregon kernel: [66571.611071]
[<ffffffff811e8164>] xfs_ilock+0x84/0xb0
Nov 17 14:07:39 thoregon kernel: [66571.611073]
[<ffffffff811f51a4>] xfs_create+0x1d4/0x5a0
Nov 17 14:07:39 thoregon kernel: [66571.611074]
[<ffffffff811eca2a>] xfs_vn_mknod+0x8a/0x1b0
Nov 17 14:07:39 thoregon kernel: [66571.611076]
[<ffffffff811ecb7e>] xfs_vn_create+0xe/0x10
Nov 17 14:07:39 thoregon kernel: [66571.611078]
[<ffffffff81100322>] vfs_create+0x72/0xc0
Nov 17 14:07:39 thoregon kernel: [66571.611080]
[<ffffffff81100b7e>] do_last.isra.69+0x80e/0xc80
Nov 17 14:07:39 thoregon kernel: [66571.611082]
[<ffffffff8110109b>] path_openat.isra.70+0xab/0x490
Nov 17 14:07:39 thoregon kernel: [66571.611084]
[<ffffffff8110183d>] do_filp_open+0x3d/0xa0
Nov 17 14:07:39 thoregon kernel: [66571.611086]
[<ffffffff810f2129>] do_sys_open+0xf9/0x1e0
Nov 17 14:07:39 thoregon kernel: [66571.611088]
[<ffffffff810f222c>] sys_open+0x1c/0x20
Nov 17 14:07:39 thoregon kernel: [66571.611090]
[<ffffffff816b2e52>] system_call_fastpath+0x16/0x1b
Nov 17 14:07:39 thoregon kernel: [66571.611091]
Nov 17 14:07:39 thoregon kernel: [66571.611092] ->
(sb_internal){.+.+.?} ops: 1341531 {
Nov 17 14:07:39 thoregon kernel: [66571.611095]    HARDIRQ-ON-R at:
Nov 17 14:07:39 thoregon kernel: [66571.611096]
[<ffffffff8107ef6a>] __lock_acquire+0x50a/0x1c00
Nov 17 14:07:39 thoregon kernel: [66571.611098]
[<ffffffff81080b55>] lock_acquire+0x55/0x70
Nov 17 14:07:39 thoregon kernel: [66571.611100]
[<ffffffff810f451b>] __sb_start_write+0xab/0x190
Nov 17 14:07:39 thoregon kernel: [66571.611102]
[<ffffffff8122b138>] xfs_trans_alloc+0x28/0x50
Nov 17 14:07:39 thoregon kernel: [66571.611104]
[<ffffffff811f5157>] xfs_create+0x187/0x5a0
Nov 17 14:07:39 thoregon kernel: [66571.611105]
[<ffffffff811eca2a>] xfs_vn_mknod+0x8a/0x1b0
Nov 17 14:07:39 thoregon kernel: [66571.611107]
[<ffffffff811ecb7e>] xfs_vn_create+0xe/0x10
Nov 17 14:07:39 thoregon kernel: [66571.611109]
[<ffffffff81100322>] vfs_create+0x72/0xc0
Nov 17 14:07:39 thoregon kernel: [66571.611111]
[<ffffffff81100b7e>] do_last.isra.69+0x80e/0xc80
Nov 17 14:07:39 thoregon kernel: [66571.611113]
[<ffffffff8110109b>] path_openat.isra.70+0xab/0x490
Nov 17 14:07:39 thoregon kernel: [66571.611115]
[<ffffffff8110183d>] do_filp_open+0x3d/0xa0
Nov 17 14:07:39 thoregon kernel: [66571.611117]
[<ffffffff810f2129>] do_sys_open+0xf9/0x1e0
Nov 17 14:07:39 thoregon kernel: [66571.611119]
[<ffffffff810f222c>] sys_open+0x1c/0x20
Nov 17 14:07:39 thoregon kernel: [66571.611121]
[<ffffffff816b2e52>] system_call_fastpath+0x16/0x1b
Nov 17 14:07:39 thoregon kernel: [66571.611123]    SOFTIRQ-ON-R at:
Nov 17 14:07:39 thoregon kernel: [66571.611124]
[<ffffffff8107f0c7>] __lock_acquire+0x667/0x1c00
Nov 17 14:07:39 thoregon kernel: [66571.611126]
[<ffffffff81080b55>] lock_acquire+0x55/0x70
Nov 17 14:07:39 thoregon kernel: [66571.611128]
[<ffffffff810f451b>] __sb_start_write+0xab/0x190
Nov 17 14:07:39 thoregon kernel: [66571.611130]
[<ffffffff8122b138>] xfs_trans_alloc+0x28/0x50
Nov 17 14:07:39 thoregon kernel: [66571.611132]
[<ffffffff811f5157>] xfs_create+0x187/0x5a0
Nov 17 14:07:39 thoregon kernel: [66571.611133]
[<ffffffff811eca2a>] xfs_vn_mknod+0x8a/0x1b0
Nov 17 14:07:39 thoregon kernel: [66571.611135]
[<ffffffff811ecb7e>] xfs_vn_create+0xe/0x10
Nov 17 14:07:39 thoregon kernel: [66571.611137]
[<ffffffff81100322>] vfs_create+0x72/0xc0
Nov 17 14:07:39 thoregon kernel: [66571.611139]
[<ffffffff81100b7e>] do_last.isra.69+0x80e/0xc80
Nov 17 14:07:39 thoregon kernel: [66571.611141]
[<ffffffff8110109b>] path_openat.isra.70+0xab/0x490
Nov 17 14:07:39 thoregon kernel: [66571.611143]
[<ffffffff8110183d>] do_filp_open+0x3d/0xa0
Nov 17 14:07:39 thoregon kernel: [66571.611145]
[<ffffffff810f2129>] do_sys_open+0xf9/0x1e0
Nov 17 14:07:39 thoregon kernel: [66571.611147]
[<ffffffff810f222c>] sys_open+0x1c/0x20
Nov 17 14:07:39 thoregon kernel: [66571.611149]
[<ffffffff816b2e52>] system_call_fastpath+0x16/0x1b
Nov 17 14:07:39 thoregon kernel: [66571.611151]    IN-RECLAIM_FS-R at:
Nov 17 14:07:39 thoregon kernel: [66571.611152]
[<ffffffff8107efdf>] __lock_acquire+0x57f/0x1c00
Nov 17 14:07:39 thoregon kernel: [66571.611154]
[<ffffffff81080b55>] lock_acquire+0x55/0x70
Nov 17 14:07:39 thoregon kernel: [66571.611156]
[<ffffffff810f451b>] __sb_start_write+0xab/0x190
Nov 17 14:07:39 thoregon kernel: [66571.611158]
[<ffffffff8122b138>] xfs_trans_alloc+0x28/0x50
Nov 17 14:07:39 thoregon kernel: [66571.611159]
[<ffffffff811f3e64>] xfs_free_eofblocks+0x104/0x250
Nov 17 14:07:39 thoregon kernel: [66571.611161]
[<ffffffff811f4b39>] xfs_inactive+0xa9/0x480
Nov 17 14:07:39 thoregon kernel: [66571.611163]
[<ffffffff811efe10>] xfs_fs_evict_inode+0x70/0x80
Nov 17 14:07:39 thoregon kernel: [66571.611165]
[<ffffffff8110cb7f>] evict+0xaf/0x1b0
Nov 17 14:07:39 thoregon kernel: [66571.611168]
[<ffffffff8110d209>] dispose_list+0x39/0x50
Nov 17 14:07:39 thoregon kernel: [66571.611170]
[<ffffffff8110dc63>] prune_icache_sb+0x183/0x340
Nov 17 14:07:39 thoregon kernel: [66571.611172]
[<ffffffff810f5bc3>] prune_super+0xf3/0x1a0
Nov 17 14:07:39 thoregon kernel: [66571.611173]
[<ffffffff810bbdee>] shrink_slab+0x11e/0x1f0
Nov 17 14:07:39 thoregon kernel: [66571.611175]
[<ffffffff810be98f>] try_to_free_pages+0x21f/0x4e0
Nov 17 14:07:39 thoregon kernel: [66571.611177]
[<ffffffff810b5ec6>] __alloc_pages_nodemask+0x506/0x800
Nov 17 14:07:39 thoregon kernel: [66571.611179]
[<ffffffff810ce56e>] handle_pte_fault+0x5ae/0x7a0
Nov 17 14:07:39 thoregon kernel: [66571.611182]
[<ffffffff810cf769>] handle_mm_fault+0x1f9/0x2a0
Nov 17 14:07:39 thoregon kernel: [66571.611184]
[<ffffffff81029dfc>] __do_page_fault+0x16c/0x480
Nov 17 14:07:39 thoregon kernel: [66571.611186]
[<ffffffff8102a139>] do_page_fault+0x9/0x10
Nov 17 14:07:39 thoregon kernel: [66571.611188]
[<ffffffff816b287f>] page_fault+0x1f/0x30
Nov 17 14:07:39 thoregon kernel: [66571.611190]    RECLAIM_FS-ON-R at:
Nov 17 14:07:39 thoregon kernel: [66571.611191]
[<ffffffff8108137e>] mark_held_locks+0x7e/0x130
Nov 17 14:07:39 thoregon kernel: [66571.611193]
[<ffffffff81081a63>] lockdep_trace_alloc+0x63/0xc0
Nov 17 14:07:39 thoregon kernel: [66571.611195]
[<ffffffff810e9dc5>] kmem_cache_alloc+0x35/0xe0
Nov 17 14:07:39 thoregon kernel: [66571.611197]
[<ffffffff811f6d4f>] kmem_zone_alloc+0x5f/0xe0
Nov 17 14:07:39 thoregon kernel: [66571.611198]
[<ffffffff811f6de8>] kmem_zone_zalloc+0x18/0x50
Nov 17 14:07:39 thoregon kernel: [66571.611200]
[<ffffffff8122b0b2>] _xfs_trans_alloc+0x32/0x90
Nov 17 14:07:39 thoregon kernel: [66571.611202]
[<ffffffff8122b148>] xfs_trans_alloc+0x38/0x50
Nov 17 14:07:39 thoregon kernel: [66571.611204]
[<ffffffff811f5157>] xfs_create+0x187/0x5a0
Nov 17 14:07:39 thoregon kernel: [66571.611205]
[<ffffffff811eca2a>] xfs_vn_mknod+0x8a/0x1b0
Nov 17 14:07:39 thoregon kernel: [66571.611207]
[<ffffffff811ecb7e>] xfs_vn_create+0xe/0x10
Nov 17 14:07:39 thoregon kernel: [66571.611209]
[<ffffffff81100322>] vfs_create+0x72/0xc0
Nov 17 14:07:39 thoregon kernel: [66571.611211]
[<ffffffff81100b7e>] do_last.isra.69+0x80e/0xc80
Nov 17 14:07:39 thoregon kernel: [66571.611213]
[<ffffffff8110109b>] path_openat.isra.70+0xab/0x490
Nov 17 14:07:39 thoregon kernel: [66571.611215]
[<ffffffff8110183d>] do_filp_open+0x3d/0xa0
Nov 17 14:07:39 thoregon kernel: [66571.611217]
[<ffffffff810f2129>] do_sys_open+0xf9/0x1e0
Nov 17 14:07:39 thoregon kernel: [66571.611219]
[<ffffffff810f222c>] sys_open+0x1c/0x20
Nov 17 14:07:39 thoregon kernel: [66571.611221]
[<ffffffff816b2e52>] system_call_fastpath+0x16/0x1b
Nov 17 14:07:39 thoregon kernel: [66571.611223]    INITIAL USE at:
Nov 17 14:07:39 thoregon kernel: [66571.611224]
[<ffffffff8107ed49>] __lock_acquire+0x2e9/0x1c00
Nov 17 14:07:39 thoregon kernel: [66571.611225]
[<ffffffff81080b55>] lock_acquire+0x55/0x70
Nov 17 14:07:39 thoregon kernel: [66571.611227]
[<ffffffff810f451b>] __sb_start_write+0xab/0x190
Nov 17 14:07:39 thoregon kernel: [66571.611229]
[<ffffffff8122b138>] xfs_trans_alloc+0x28/0x50
Nov 17 14:07:39 thoregon kernel: [66571.611231]
[<ffffffff811f5157>] xfs_create+0x187/0x5a0
Nov 17 14:07:39 thoregon kernel: [66571.611232]
[<ffffffff811eca2a>] xfs_vn_mknod+0x8a/0x1b0
Nov 17 14:07:39 thoregon kernel: [66571.611234]
[<ffffffff811ecb7e>] xfs_vn_create+0xe/0x10
Nov 17 14:07:39 thoregon kernel: [66571.611236]
[<ffffffff81100322>] vfs_create+0x72/0xc0
Nov 17 14:07:39 thoregon kernel: [66571.611238]
[<ffffffff81100b7e>] do_last.isra.69+0x80e/0xc80
Nov 17 14:07:39 thoregon kernel: [66571.611240]
[<ffffffff8110109b>] path_openat.isra.70+0xab/0x490
Nov 17 14:07:39 thoregon kernel: [66571.611242]
[<ffffffff8110183d>] do_filp_open+0x3d/0xa0
Nov 17 14:07:39 thoregon kernel: [66571.611244]
[<ffffffff810f2129>] do_sys_open+0xf9/0x1e0
Nov 17 14:07:39 thoregon kernel: [66571.611246]
[<ffffffff810f222c>] sys_open+0x1c/0x20
Nov 17 14:07:39 thoregon kernel: [66571.611248]
[<ffffffff816b2e52>] system_call_fastpath+0x16/0x1b
Nov 17 14:07:39 thoregon kernel: [66571.611250]  }
Nov 17 14:07:39 thoregon kernel: [66571.611251]  ... key      at:
[<ffffffff81c34e40>] xfs_fs_type+0x60/0x80
Nov 17 14:07:39 thoregon kernel: [66571.611254]  ... acquired at:
Nov 17 14:07:39 thoregon kernel: [66571.611254]
[<ffffffff8107df3b>] check_usage_forwards+0x10b/0x140
Nov 17 14:07:39 thoregon kernel: [66571.611256]
[<ffffffff8107e900>] mark_lock+0x190/0x2f0
Nov 17 14:07:39 thoregon kernel: [66571.611258]
[<ffffffff8107efdf>] __lock_acquire+0x57f/0x1c00
Nov 17 14:07:39 thoregon kernel: [66571.611260]
[<ffffffff81080b55>] lock_acquire+0x55/0x70
Nov 17 14:07:39 thoregon kernel: [66571.611261]
[<ffffffff810f451b>] __sb_start_write+0xab/0x190
Nov 17 14:07:39 thoregon kernel: [66571.611263]
[<ffffffff8122b138>] xfs_trans_alloc+0x28/0x50
Nov 17 14:07:39 thoregon kernel: [66571.611265]
[<ffffffff811f3e64>] xfs_free_eofblocks+0x104/0x250
Nov 17 14:07:39 thoregon kernel: [66571.611266]
[<ffffffff811f4b39>] xfs_inactive+0xa9/0x480
Nov 17 14:07:39 thoregon kernel: [66571.611268]
[<ffffffff811efe10>] xfs_fs_evict_inode+0x70/0x80
Nov 17 14:07:39 thoregon kernel: [66571.611270]
[<ffffffff8110cb7f>] evict+0xaf/0x1b0
Nov 17 14:07:39 thoregon kernel: [66571.611271]
[<ffffffff8110d209>] dispose_list+0x39/0x50
Nov 17 14:07:39 thoregon kernel: [66571.611273]
[<ffffffff8110dc63>] prune_icache_sb+0x183/0x340
Nov 17 14:07:39 thoregon kernel: [66571.611275]
[<ffffffff810f5bc3>] prune_super+0xf3/0x1a0
Nov 17 14:07:39 thoregon kernel: [66571.611277]
[<ffffffff810bbdee>] shrink_slab+0x11e/0x1f0
Nov 17 14:07:39 thoregon kernel: [66571.611278]
[<ffffffff810be98f>] try_to_free_pages+0x21f/0x4e0
Nov 17 14:07:39 thoregon kernel: [66571.611280]
[<ffffffff810b5ec6>] __alloc_pages_nodemask+0x506/0x800
Nov 17 14:07:39 thoregon kernel: [66571.611282]
[<ffffffff810ce56e>] handle_pte_fault+0x5ae/0x7a0
Nov 17 14:07:39 thoregon kernel: [66571.611284]
[<ffffffff810cf769>] handle_mm_fault+0x1f9/0x2a0
Nov 17 14:07:39 thoregon kernel: [66571.611286]
[<ffffffff81029dfc>] __do_page_fault+0x16c/0x480
Nov 17 14:07:39 thoregon kernel: [66571.611287]
[<ffffffff8102a139>] do_page_fault+0x9/0x10
Nov 17 14:07:39 thoregon kernel: [66571.611289]
[<ffffffff816b287f>] page_fault+0x1f/0x30
Nov 17 14:07:39 thoregon kernel: [66571.611291]
Nov 17 14:07:39 thoregon kernel: [66571.611292]
Nov 17 14:07:39 thoregon kernel: [66571.611292] stack backtrace:
Nov 17 14:07:39 thoregon kernel: [66571.611294] Pid: 21330, comm: cc1
Not tainted 3.7.0-rc5 #1
Nov 17 14:07:39 thoregon kernel: [66571.611295] Call Trace:
Nov 17 14:07:39 thoregon kernel: [66571.611298]  [<ffffffff8107de28>]
print_irq_inversion_bug.part.37+0x1e8/0x1f0
Nov 17 14:07:39 thoregon kernel: [66571.611300]  [<ffffffff8107df3b>]
check_usage_forwards+0x10b/0x140
Nov 17 14:07:39 thoregon kernel: [66571.611303]  [<ffffffff8107e900>]
mark_lock+0x190/0x2f0
Nov 17 14:07:39 thoregon kernel: [66571.611306]  [<ffffffff8150406e>]
? dm_request+0x2e/0x2a0
Nov 17 14:07:39 thoregon kernel: [66571.611308]  [<ffffffff8107de30>]
? print_irq_inversion_bug.part.37+0x1f0/0x1f0
Nov 17 14:07:39 thoregon kernel: [66571.611310]  [<ffffffff8107efdf>]
__lock_acquire+0x57f/0x1c00
Nov 17 14:07:39 thoregon kernel: [66571.611313]  [<ffffffff812202f4>]
? xfs_iext_bno_to_ext+0x84/0x160
Nov 17 14:07:39 thoregon kernel: [66571.611316]  [<ffffffff8120a023>]
? xfs_bmbt_get_all+0x13/0x20
Nov 17 14:07:39 thoregon kernel: [66571.611318]  [<ffffffff81200fb4>]
? xfs_bmap_search_multi_extents+0xa4/0x110
Nov 17 14:07:39 thoregon kernel: [66571.611320]  [<ffffffff81080b55>]
lock_acquire+0x55/0x70
Nov 17 14:07:39 thoregon kernel: [66571.611322]  [<ffffffff8122b138>]
? xfs_trans_alloc+0x28/0x50
Nov 17 14:07:39 thoregon kernel: [66571.611324]  [<ffffffff810f451b>]
__sb_start_write+0xab/0x190
Nov 17 14:07:39 thoregon kernel: [66571.611326]  [<ffffffff8122b138>]
? xfs_trans_alloc+0x28/0x50
Nov 17 14:07:39 thoregon kernel: [66571.611328]  [<ffffffff8122b138>]
? xfs_trans_alloc+0x28/0x50
Nov 17 14:07:39 thoregon kernel: [66571.611330]  [<ffffffff8122b138>]
xfs_trans_alloc+0x28/0x50
Nov 17 14:07:39 thoregon kernel: [66571.611332]  [<ffffffff811f3e64>]
xfs_free_eofblocks+0x104/0x250
Nov 17 14:07:39 thoregon kernel: [66571.611334]  [<ffffffff816b204b>]
? _raw_spin_unlock_irq+0x2b/0x50
Nov 17 14:07:39 thoregon kernel: [66571.611336]  [<ffffffff811f4b39>]
xfs_inactive+0xa9/0x480
Nov 17 14:07:39 thoregon kernel: [66571.611337]  [<ffffffff816b204b>]
? _raw_spin_unlock_irq+0x2b/0x50
Nov 17 14:07:39 thoregon kernel: [66571.611340]  [<ffffffff811efe10>]
xfs_fs_evict_inode+0x70/0x80
Nov 17 14:07:39 thoregon kernel: [66571.611342]  [<ffffffff8110cb7f>]
evict+0xaf/0x1b0
Nov 17 14:07:39 thoregon kernel: [66571.611344]  [<ffffffff8110d209>]
dispose_list+0x39/0x50
Nov 17 14:07:39 thoregon kernel: [66571.611346]  [<ffffffff8110dc63>]
prune_icache_sb+0x183/0x340
Nov 17 14:07:39 thoregon kernel: [66571.611347]  [<ffffffff810f5bc3>]
prune_super+0xf3/0x1a0
Nov 17 14:07:39 thoregon kernel: [66571.611349]  [<ffffffff810bbdee>]
shrink_slab+0x11e/0x1f0
Nov 17 14:07:39 thoregon kernel: [66571.611352]  [<ffffffff810be98f>]
try_to_free_pages+0x21f/0x4e0
Nov 17 14:07:39 thoregon kernel: [66571.611354]  [<ffffffff810b5ec6>]
__alloc_pages_nodemask+0x506/0x800
Nov 17 14:07:39 thoregon kernel: [66571.611356]  [<ffffffff810b9e40>]
? lru_deactivate_fn+0x1c0/0x1c0
Nov 17 14:07:39 thoregon kernel: [66571.611358]  [<ffffffff810ce56e>]
handle_pte_fault+0x5ae/0x7a0
Nov 17 14:07:39 thoregon kernel: [66571.611360]  [<ffffffff810cf769>]
handle_mm_fault+0x1f9/0x2a0
Nov 17 14:07:39 thoregon kernel: [66571.611363]  [<ffffffff81029dfc>]
__do_page_fault+0x16c/0x480
Nov 17 14:07:39 thoregon kernel: [66571.611366]  [<ffffffff8129c7ad>]
? trace_hardirqs_off_thunk+0x3a/0x3c
Nov 17 14:07:39 thoregon kernel: [66571.611368]  [<ffffffff8102a139>]
do_page_fault+0x9/0x10
Nov 17 14:07:39 thoregon kernel: [66571.611370]  [<ffffffff816b287f>]
page_fault+0x1f/0x30

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

* Re: Hang in XFS reclaim on 3.7.0-rc3
  2012-11-18 10:24       ` Torsten Kaiser
@ 2012-11-18 15:29         ` Torsten Kaiser
  -1 siblings, 0 replies; 31+ messages in thread
From: Torsten Kaiser @ 2012-11-18 15:29 UTC (permalink / raw)
  To: Dave Chinner; +Cc: xfs, Linux Kernel

On Sun, Nov 18, 2012 at 11:24 AM, Torsten Kaiser
<just.for.lkml@googlemail.com> wrote:
> On Tue, Oct 30, 2012 at 9:37 PM, Torsten Kaiser
> <just.for.lkml@googlemail.com> wrote:
>> I will keep LOCKDEP enabled on that system, and if there really is
>> another splat, I will report back here. But I rather doubt that this
>> will be needed.
>
> After the patch, I did not see this problem again, but today I found
> another LOCKDEP report that also looks XFS related.
> I found it twice in the logs, and as both were slightly different, I
> will attach both versions.

> Nov  6 21:57:09 thoregon kernel: [ 9941.104353] 3.7.0-rc4 #1 Not tainted
> Nov  6 21:57:09 thoregon kernel: [ 9941.104355] inconsistent
> {RECLAIM_FS-ON-W} -> {IN-RECLAIM_FS-W} usage.
> Nov  6 21:57:09 thoregon kernel: [ 9941.104430]        CPU0
> Nov  6 21:57:09 thoregon kernel: [ 9941.104431]        ----
> Nov  6 21:57:09 thoregon kernel: [ 9941.104432]   lock(&(&ip->i_lock)->mr_lock);
> Nov  6 21:57:09 thoregon kernel: [ 9941.104433]   <Interrupt>
> Nov  6 21:57:09 thoregon kernel: [ 9941.104434]
> lock(&(&ip->i_lock)->mr_lock);
> Nov  6 21:57:09 thoregon kernel: [ 9941.104435]
> Nov  6 21:57:09 thoregon kernel: [ 9941.104435]  *** DEADLOCK ***

Sorry! I copied the wrong report. Your fix only landed in -rc5, so my
vanilla -rc4 did (also) report the old problem again, and I pasted that
report instead of the second appearance of the new problem.

Here is the correct second report of the sb_internal vs
ip->i_lock->mr_lock problem:
[110926.972477]
[110926.972482] =========================================================
[110926.972484] [ INFO: possible irq lock inversion dependency detected ]
[110926.972486] 3.7.0-rc4 #1 Not tainted
[110926.972487] ---------------------------------------------------------
[110926.972489] kswapd0/725 just changed the state of lock:
[110926.972490]  (sb_internal){.+.+.?}, at: [<ffffffff8122b268>]
xfs_trans_alloc+0x28/0x50
[110926.972499] but this lock took another, RECLAIM_FS-unsafe lock in the past:
[110926.972500]  (&(&ip->i_lock)->mr_lock/1){+.+.+.}
[110926.972500]
[110926.972500] and interrupts could create inverse lock ordering between them.
[110926.972500]
[110926.972503]
[110926.972503] other info that might help us debug this:
[110926.972504]  Possible interrupt unsafe locking scenario:
[110926.972504]
[110926.972505]        CPU0                    CPU1
[110926.972506]        ----                    ----
[110926.972507]   lock(&(&ip->i_lock)->mr_lock/1);
[110926.972509]                                local_irq_disable();
[110926.972509]                                lock(sb_internal);
[110926.972511]                                lock(&(&ip->i_lock)->mr_lock/1);
[110926.972512]   <Interrupt>
[110926.972513]     lock(sb_internal);
[110926.972514]
[110926.972514]  *** DEADLOCK ***
[110926.972514]
[110926.972516] 2 locks held by kswapd0/725:
[110926.972517]  #0:  (shrinker_rwsem){++++..}, at:
[<ffffffff810bbd22>] shrink_slab+0x32/0x1f0
[110926.972522]  #1:  (&type->s_umount_key#20){++++.+}, at:
[<ffffffff810f5a8e>] grab_super_passive+0x3e/0x90
[110926.972527]
[110926.972527] the shortest dependencies between 2nd lock and 1st lock:
[110926.972533]  -> (&(&ip->i_lock)->mr_lock/1){+.+.+.} ops: 58117 {
[110926.972536]     HARDIRQ-ON-W at:
[110926.972537]                       [<ffffffff8107f091>]
__lock_acquire+0x631/0x1c00
[110926.972540]                       [<ffffffff81080b55>]
lock_acquire+0x55/0x70
[110926.972542]                       [<ffffffff8106126a>]
down_write_nested+0x4a/0x70
[110926.972545]                       [<ffffffff811e8164>] xfs_ilock+0x84/0xb0
[110926.972548]                       [<ffffffff811f5194>]
xfs_create+0x1d4/0x5a0
[110926.972550]                       [<ffffffff811eca1a>]
xfs_vn_mknod+0x8a/0x1b0
[110926.972552]                       [<ffffffff811ecb6e>]
xfs_vn_create+0xe/0x10
[110926.972554]                       [<ffffffff81100332>] vfs_create+0x72/0xc0
[110926.972556]                       [<ffffffff81100b8e>]
do_last.isra.69+0x80e/0xc80
[110926.972558]                       [<ffffffff811010ab>]
path_openat.isra.70+0xab/0x490
[110926.972560]                       [<ffffffff8110184d>]
do_filp_open+0x3d/0xa0
[110926.972562]                       [<ffffffff810f2139>]
do_sys_open+0xf9/0x1e0
[110926.972565]                       [<ffffffff810f223c>] sys_open+0x1c/0x20
[110926.972567]                       [<ffffffff816b2d12>]
system_call_fastpath+0x16/0x1b
[110926.972570]     SOFTIRQ-ON-W at:
[110926.972571]                       [<ffffffff8107f0c7>]
__lock_acquire+0x667/0x1c00
[110926.972573]                       [<ffffffff81080b55>]
lock_acquire+0x55/0x70
[110926.972574]                       [<ffffffff8106126a>]
down_write_nested+0x4a/0x70
[110926.972576]                       [<ffffffff811e8164>] xfs_ilock+0x84/0xb0
[110926.972578]                       [<ffffffff811f5194>]
xfs_create+0x1d4/0x5a0
[110926.972580]                       [<ffffffff811eca1a>]
xfs_vn_mknod+0x8a/0x1b0
[110926.972581]                       [<ffffffff811ecb6e>]
xfs_vn_create+0xe/0x10
[110926.972583]                       [<ffffffff81100332>] vfs_create+0x72/0xc0
[110926.972585]                       [<ffffffff81100b8e>]
do_last.isra.69+0x80e/0xc80
[110926.972587]                       [<ffffffff811010ab>]
path_openat.isra.70+0xab/0x490
[110926.972589]                       [<ffffffff8110184d>]
do_filp_open+0x3d/0xa0
[110926.972591]                       [<ffffffff810f2139>]
do_sys_open+0xf9/0x1e0
[110926.972593]                       [<ffffffff810f223c>] sys_open+0x1c/0x20
[110926.972595]                       [<ffffffff816b2d12>]
system_call_fastpath+0x16/0x1b
[110926.972597]     RECLAIM_FS-ON-W at:
[110926.972598]                          [<ffffffff8108137e>]
mark_held_locks+0x7e/0x130
[110926.972600]                          [<ffffffff81081a63>]
lockdep_trace_alloc+0x63/0xc0
[110926.972601]                          [<ffffffff810e9dd5>]
kmem_cache_alloc+0x35/0xe0
[110926.972603]                          [<ffffffff810dba31>]
vm_map_ram+0x271/0x770
[110926.972606]                          [<ffffffff811e1316>]
_xfs_buf_map_pages+0x46/0xe0
[110926.972609]                          [<ffffffff811e222a>]
xfs_buf_get_map+0x8a/0x130
[110926.972610]                          [<ffffffff81233ab9>]
xfs_trans_get_buf_map+0xa9/0xd0
[110926.972613]                          [<ffffffff8121bced>]
xfs_ialloc_inode_init+0xcd/0x1d0
[110926.972616]                          [<ffffffff8121c25e>]
xfs_ialloc_ag_alloc+0x1be/0x560
[110926.972618]                          [<ffffffff8121da65>]
xfs_dialloc+0x185/0x2a0
[110926.972619]                          [<ffffffff8121f198>]
xfs_ialloc+0x58/0x650
[110926.972621]                          [<ffffffff811f3985>]
xfs_dir_ialloc+0x65/0x270
[110926.972623]                          [<ffffffff811f536c>]
xfs_create+0x3ac/0x5a0
[110926.972624]                          [<ffffffff811eca1a>]
xfs_vn_mknod+0x8a/0x1b0
[110926.972626]                          [<ffffffff811ecb6e>]
xfs_vn_create+0xe/0x10
[110926.972628]                          [<ffffffff81100332>]
vfs_create+0x72/0xc0
[110926.972630]                          [<ffffffff81100b8e>]
do_last.isra.69+0x80e/0xc80
[110926.972632]                          [<ffffffff811010ab>]
path_openat.isra.70+0xab/0x490
[110926.972634]                          [<ffffffff8110184d>]
do_filp_open+0x3d/0xa0
[110926.972636]                          [<ffffffff810f2139>]
do_sys_open+0xf9/0x1e0
[110926.972638]                          [<ffffffff810f223c>] sys_open+0x1c/0x20
[110926.972640]                          [<ffffffff816b2d12>]
system_call_fastpath+0x16/0x1b
[110926.972642]     INITIAL USE at:
[110926.972642]                      [<ffffffff8107ed49>]
__lock_acquire+0x2e9/0x1c00
[110926.972644]                      [<ffffffff81080b55>] lock_acquire+0x55/0x70
[110926.972646]                      [<ffffffff8106126a>]
down_write_nested+0x4a/0x70
[110926.972648]                      [<ffffffff811e8164>] xfs_ilock+0x84/0xb0
[110926.972650]                      [<ffffffff811f5194>] xfs_create+0x1d4/0x5a0
[110926.972651]                      [<ffffffff811eca1a>]
xfs_vn_mknod+0x8a/0x1b0
[110926.972653]                      [<ffffffff811ecb6e>] xfs_vn_create+0xe/0x10
[110926.972655]                      [<ffffffff81100332>] vfs_create+0x72/0xc0
[110926.972657]                      [<ffffffff81100b8e>]
do_last.isra.69+0x80e/0xc80
[110926.972659]                      [<ffffffff811010ab>]
path_openat.isra.70+0xab/0x490
[110926.972661]                      [<ffffffff8110184d>] do_filp_open+0x3d/0xa0
[110926.972663]                      [<ffffffff810f2139>] do_sys_open+0xf9/0x1e0
[110926.972664]                      [<ffffffff810f223c>] sys_open+0x1c/0x20
[110926.972666]                      [<ffffffff816b2d12>]
system_call_fastpath+0x16/0x1b
[110926.972668]   }
[110926.972669]   ... key      at: [<ffffffff825b4b81>] __key.41355+0x1/0x8
[110926.972672]   ... acquired at:
[110926.972672]    [<ffffffff81080b55>] lock_acquire+0x55/0x70
[110926.972674]    [<ffffffff8106126a>] down_write_nested+0x4a/0x70
[110926.972676]    [<ffffffff811e8164>] xfs_ilock+0x84/0xb0
[110926.972678]    [<ffffffff811f5194>] xfs_create+0x1d4/0x5a0
[110926.972679]    [<ffffffff811eca1a>] xfs_vn_mknod+0x8a/0x1b0
[110926.972681]    [<ffffffff811ecb6e>] xfs_vn_create+0xe/0x10
[110926.972683]    [<ffffffff81100332>] vfs_create+0x72/0xc0
[110926.972684]    [<ffffffff81100b8e>] do_last.isra.69+0x80e/0xc80
[110926.972686]    [<ffffffff811010ab>] path_openat.isra.70+0xab/0x490
[110926.972688]    [<ffffffff8110184d>] do_filp_open+0x3d/0xa0
[110926.972690]    [<ffffffff810f2139>] do_sys_open+0xf9/0x1e0
[110926.972692]    [<ffffffff810f223c>] sys_open+0x1c/0x20
[110926.972694]    [<ffffffff816b2d12>] system_call_fastpath+0x16/0x1b
[110926.972696]
[110926.972696] -> (sb_internal){.+.+.?} ops: 1710064 {
[110926.972699]    HARDIRQ-ON-R at:
[110926.972700]                     [<ffffffff8107ef6a>]
__lock_acquire+0x50a/0x1c00
[110926.972702]                     [<ffffffff81080b55>] lock_acquire+0x55/0x70
[110926.972704]                     [<ffffffff810f452b>]
__sb_start_write+0xab/0x190
[110926.972705]                     [<ffffffff8122b268>]
xfs_trans_alloc+0x28/0x50
[110926.972707]                     [<ffffffff811f5147>] xfs_create+0x187/0x5a0
[110926.972709]                     [<ffffffff811eca1a>] xfs_vn_mknod+0x8a/0x1b0
[110926.972711]                     [<ffffffff811ecb6e>] xfs_vn_create+0xe/0x10
[110926.972712]                     [<ffffffff81100332>] vfs_create+0x72/0xc0
[110926.972714]                     [<ffffffff81100b8e>]
do_last.isra.69+0x80e/0xc80
[110926.972716]                     [<ffffffff811010ab>]
path_openat.isra.70+0xab/0x490
[110926.972718]                     [<ffffffff8110184d>] do_filp_open+0x3d/0xa0
[110926.972720]                     [<ffffffff810f2139>] do_sys_open+0xf9/0x1e0
[110926.972722]                     [<ffffffff810f223c>] sys_open+0x1c/0x20
[110926.972724]                     [<ffffffff816b2d12>]
system_call_fastpath+0x16/0x1b
[110926.972726]    SOFTIRQ-ON-R at:
[110926.972727]                     [<ffffffff8107f0c7>]
__lock_acquire+0x667/0x1c00
[110926.972728]                     [<ffffffff81080b55>] lock_acquire+0x55/0x70
[110926.972730]                     [<ffffffff810f452b>]
__sb_start_write+0xab/0x190
[110926.972732]                     [<ffffffff8122b268>]
xfs_trans_alloc+0x28/0x50
[110926.972734]                     [<ffffffff811f5147>] xfs_create+0x187/0x5a0
[110926.972735]                     [<ffffffff811eca1a>] xfs_vn_mknod+0x8a/0x1b0
[110926.972737]                     [<ffffffff811ecb6e>] xfs_vn_create+0xe/0x10
[110926.972739]                     [<ffffffff81100332>] vfs_create+0x72/0xc0
[110926.972741]                     [<ffffffff81100b8e>]
do_last.isra.69+0x80e/0xc80
[110926.972743]                     [<ffffffff811010ab>]
path_openat.isra.70+0xab/0x490
[110926.972745]                     [<ffffffff8110184d>] do_filp_open+0x3d/0xa0
[110926.972747]                     [<ffffffff810f2139>] do_sys_open+0xf9/0x1e0
[110926.972749]                     [<ffffffff810f223c>] sys_open+0x1c/0x20
[110926.972750]                     [<ffffffff816b2d12>]
system_call_fastpath+0x16/0x1b
[110926.972752]    IN-RECLAIM_FS-R at:
[110926.972753]                        [<ffffffff8107efdf>]
__lock_acquire+0x57f/0x1c00
[110926.972755]                        [<ffffffff81080b55>]
lock_acquire+0x55/0x70
[110926.972757]                        [<ffffffff810f452b>]
__sb_start_write+0xab/0x190
[110926.972758]                        [<ffffffff8122b268>]
xfs_trans_alloc+0x28/0x50
[110926.972760]                        [<ffffffff811f3e54>]
xfs_free_eofblocks+0x104/0x250
[110926.972762]                        [<ffffffff811f4b29>]
xfs_inactive+0xa9/0x480
[110926.972763]                        [<ffffffff811efe00>]
xfs_fs_evict_inode+0x70/0x80
[110926.972765]                        [<ffffffff8110cb8f>] evict+0xaf/0x1b0
[110926.972768]                        [<ffffffff8110d219>]
dispose_list+0x39/0x50
[110926.972770]                        [<ffffffff8110dc73>]
prune_icache_sb+0x183/0x340
[110926.972772]                        [<ffffffff810f5bd3>]
prune_super+0xf3/0x1a0
[110926.972773]                        [<ffffffff810bbe0e>]
shrink_slab+0x11e/0x1f0
[110926.972775]                        [<ffffffff810be400>] kswapd+0x690/0xa10
[110926.972777]                        [<ffffffff8105c246>] kthread+0xd6/0xe0
[110926.972779]                        [<ffffffff816b2c6c>]
ret_from_fork+0x7c/0xb0
[110926.972781]    RECLAIM_FS-ON-R at:
[110926.972782]                        [<ffffffff8108137e>]
mark_held_locks+0x7e/0x130
[110926.972784]                        [<ffffffff81081a63>]
lockdep_trace_alloc+0x63/0xc0
[110926.972785]                        [<ffffffff810e9dd5>]
kmem_cache_alloc+0x35/0xe0
[110926.972787]                        [<ffffffff811f6d3f>]
kmem_zone_alloc+0x5f/0xe0
[110926.972789]                        [<ffffffff811f6dd8>]
kmem_zone_zalloc+0x18/0x50
[110926.972790]                        [<ffffffff8122b1e2>]
_xfs_trans_alloc+0x32/0x90
[110926.972792]                        [<ffffffff8122b278>]
xfs_trans_alloc+0x38/0x50
[110926.972794]                        [<ffffffff811f5147>]
xfs_create+0x187/0x5a0
[110926.972796]                        [<ffffffff811eca1a>]
xfs_vn_mknod+0x8a/0x1b0
[110926.972797]                        [<ffffffff811ecb6e>]
xfs_vn_create+0xe/0x10
[110926.972799]                        [<ffffffff81100332>] vfs_create+0x72/0xc0
[110926.972801]                        [<ffffffff81100b8e>]
do_last.isra.69+0x80e/0xc80
[110926.972803]                        [<ffffffff811010ab>]
path_openat.isra.70+0xab/0x490
[110926.972805]                        [<ffffffff8110184d>]
do_filp_open+0x3d/0xa0
[110926.972807]                        [<ffffffff810f2139>]
do_sys_open+0xf9/0x1e0
[110926.972809]                        [<ffffffff810f223c>] sys_open+0x1c/0x20
[110926.972811]                        [<ffffffff816b2d12>]
system_call_fastpath+0x16/0x1b
[110926.972813]    INITIAL USE at:
[110926.972814]                    [<ffffffff8107ed49>]
__lock_acquire+0x2e9/0x1c00
[110926.972815]                    [<ffffffff81080b55>] lock_acquire+0x55/0x70
[110926.972817]                    [<ffffffff810f452b>]
__sb_start_write+0xab/0x190
[110926.972819]                    [<ffffffff8122b268>]
xfs_trans_alloc+0x28/0x50
[110926.972820]                    [<ffffffff811f5147>] xfs_create+0x187/0x5a0
[110926.972822]                    [<ffffffff811eca1a>] xfs_vn_mknod+0x8a/0x1b0
[110926.972824]                    [<ffffffff811ecb6e>] xfs_vn_create+0xe/0x10
[110926.972826]                    [<ffffffff81100332>] vfs_create+0x72/0xc0
[110926.972827]                    [<ffffffff81100b8e>]
do_last.isra.69+0x80e/0xc80
[110926.972829]                    [<ffffffff811010ab>]
path_openat.isra.70+0xab/0x490
[110926.972831]                    [<ffffffff8110184d>] do_filp_open+0x3d/0xa0
[110926.972833]                    [<ffffffff810f2139>] do_sys_open+0xf9/0x1e0
[110926.972835]                    [<ffffffff810f223c>] sys_open+0x1c/0x20
[110926.972837]                    [<ffffffff816b2d12>]
system_call_fastpath+0x16/0x1b
[110926.972839]  }
[110926.972840]  ... key      at: [<ffffffff81c34e40>] xfs_fs_type+0x60/0x80
[110926.972842]  ... acquired at:
[110926.972843]    [<ffffffff8107df3b>] check_usage_forwards+0x10b/0x140
[110926.972845]    [<ffffffff8107e900>] mark_lock+0x190/0x2f0
[110926.972846]    [<ffffffff8107efdf>] __lock_acquire+0x57f/0x1c00
[110926.972848]    [<ffffffff81080b55>] lock_acquire+0x55/0x70
[110926.972850]    [<ffffffff810f452b>] __sb_start_write+0xab/0x190
[110926.972851]    [<ffffffff8122b268>] xfs_trans_alloc+0x28/0x50
[110926.972853]    [<ffffffff811f3e54>] xfs_free_eofblocks+0x104/0x250
[110926.972855]    [<ffffffff811f4b29>] xfs_inactive+0xa9/0x480
[110926.972856]    [<ffffffff811efe00>] xfs_fs_evict_inode+0x70/0x80
[110926.972858]    [<ffffffff8110cb8f>] evict+0xaf/0x1b0
[110926.972860]    [<ffffffff8110d219>] dispose_list+0x39/0x50
[110926.972861]    [<ffffffff8110dc73>] prune_icache_sb+0x183/0x340
[110926.972863]    [<ffffffff810f5bd3>] prune_super+0xf3/0x1a0
[110926.972865]    [<ffffffff810bbe0e>] shrink_slab+0x11e/0x1f0
[110926.972866]    [<ffffffff810be400>] kswapd+0x690/0xa10
[110926.972868]    [<ffffffff8105c246>] kthread+0xd6/0xe0
[110926.972870]    [<ffffffff816b2c6c>] ret_from_fork+0x7c/0xb0
[110926.972871]
[110926.972872]
[110926.972872] stack backtrace:
[110926.972874] Pid: 725, comm: kswapd0 Not tainted 3.7.0-rc4 #1
[110926.972875] Call Trace:
[110926.972878]  [<ffffffff8107de28>]
print_irq_inversion_bug.part.37+0x1e8/0x1f0
[110926.972880]  [<ffffffff8107df3b>] check_usage_forwards+0x10b/0x140
[110926.972883]  [<ffffffff8107e900>] mark_lock+0x190/0x2f0
[110926.972885]  [<ffffffff8107de30>] ?
print_irq_inversion_bug.part.37+0x1f0/0x1f0
[110926.972887]  [<ffffffff8107efdf>] __lock_acquire+0x57f/0x1c00
[110926.972889]  [<ffffffff81220424>] ? xfs_iext_bno_to_ext+0x84/0x160
[110926.972892]  [<ffffffff8120a0e3>] ? xfs_bmbt_get_all+0x13/0x20
[110926.972895]  [<ffffffff81201104>] ? xfs_bmap_search_multi_extents+0xa4/0x110
[110926.972897]  [<ffffffff81080b55>] lock_acquire+0x55/0x70
[110926.972899]  [<ffffffff8122b268>] ? xfs_trans_alloc+0x28/0x50
[110926.972901]  [<ffffffff810f452b>] __sb_start_write+0xab/0x190
[110926.972903]  [<ffffffff8122b268>] ? xfs_trans_alloc+0x28/0x50
[110926.972905]  [<ffffffff8122b268>] ? xfs_trans_alloc+0x28/0x50
[110926.972907]  [<ffffffff8122b268>] xfs_trans_alloc+0x28/0x50
[110926.972908]  [<ffffffff811f3e54>] xfs_free_eofblocks+0x104/0x250
[110926.972910]  [<ffffffff816b1efb>] ? _raw_spin_unlock_irq+0x2b/0x50
[110926.972912]  [<ffffffff811f4b29>] xfs_inactive+0xa9/0x480
[110926.972914]  [<ffffffff816b1efb>] ? _raw_spin_unlock_irq+0x2b/0x50
[110926.972916]  [<ffffffff811efe00>] xfs_fs_evict_inode+0x70/0x80
[110926.972918]  [<ffffffff8110cb8f>] evict+0xaf/0x1b0
[110926.972920]  [<ffffffff8110d219>] dispose_list+0x39/0x50
[110926.972922]  [<ffffffff8110dc73>] prune_icache_sb+0x183/0x340
[110926.972924]  [<ffffffff810f5bd3>] prune_super+0xf3/0x1a0
[110926.972926]  [<ffffffff810bbe0e>] shrink_slab+0x11e/0x1f0
[110926.972928]  [<ffffffff810be400>] kswapd+0x690/0xa10
[110926.972930]  [<ffffffff8105ca30>] ? __init_waitqueue_head+0x60/0x60
[110926.972932]  [<ffffffff810bdd70>] ? shrink_lruvec+0x540/0x540
[110926.972934]  [<ffffffff8105c246>] kthread+0xd6/0xe0
[110926.972936]  [<ffffffff816b1efb>] ? _raw_spin_unlock_irq+0x2b/0x50
[110926.972938]  [<ffffffff8105c170>] ? flush_kthread_worker+0xe0/0xe0
[110926.972940]  [<ffffffff816b2c6c>] ret_from_fork+0x7c/0xb0
[110926.972942]  [<ffffffff8105c170>] ? flush_kthread_worker+0xe0/0xe0

> Nov 17 14:07:38 thoregon kernel: [66571.610863]
> Nov 17 14:07:38 thoregon kernel: [66571.610869]
> =========================================================
> Nov 17 14:07:38 thoregon kernel: [66571.610870] [ INFO: possible irq
> lock inversion dependency detected ]
> Nov 17 14:07:38 thoregon kernel: [66571.610873] 3.7.0-rc5 #1 Not tainted
> Nov 17 14:07:38 thoregon kernel: [66571.610874]
> ---------------------------------------------------------
> Nov 17 14:07:38 thoregon kernel: [66571.610875] cc1/21330 just changed
> the state of lock:
> Nov 17 14:07:38 thoregon kernel: [66571.610877]
> (sb_internal){.+.+.?}, at: [<ffffffff8122b138>]
> xfs_trans_alloc+0x28/0x50
> Nov 17 14:07:38 thoregon kernel: [66571.610885] but this lock took
> another, RECLAIM_FS-unsafe lock in the past:
> Nov 17 14:07:38 thoregon kernel: [66571.610886]
> (&(&ip->i_lock)->mr_lock/1){+.+.+.}
> Nov 17 14:07:38 thoregon kernel: [66571.610886]
> Nov 17 14:07:39 thoregon kernel: [66571.610886] and interrupts could
> create inverse lock ordering between them.
> Nov 17 14:07:39 thoregon kernel: [66571.610886]
> Nov 17 14:07:39 thoregon kernel: [66571.610890]
> Nov 17 14:07:39 thoregon kernel: [66571.610890] other info that might
> help us debug this:
> Nov 17 14:07:39 thoregon kernel: [66571.610891]  Possible interrupt
> unsafe locking scenario:
> Nov 17 14:07:39 thoregon kernel: [66571.610891]
> Nov 17 14:07:39 thoregon kernel: [66571.610892]        CPU0
>         CPU1
> Nov 17 14:07:39 thoregon kernel: [66571.610893]        ----
>         ----
> Nov 17 14:07:39 thoregon kernel: [66571.610894]
> lock(&(&ip->i_lock)->mr_lock/1);
> Nov 17 14:07:39 thoregon kernel: [66571.610896]
>         local_irq_disable();
> Nov 17 14:07:39 thoregon kernel: [66571.610897]
>         lock(sb_internal);
> Nov 17 14:07:39 thoregon kernel: [66571.610898]
>         lock(&(&ip->i_lock)->mr_lock/1);
> Nov 17 14:07:39 thoregon kernel: [66571.610900]   <Interrupt>
> Nov 17 14:07:39 thoregon kernel: [66571.610901]     lock(sb_internal);
> Nov 17 14:07:39 thoregon kernel: [66571.610902]
> Nov 17 14:07:39 thoregon kernel: [66571.610902]  *** DEADLOCK ***
> Nov 17 14:07:39 thoregon kernel: [66571.610902]
> Nov 17 14:07:39 thoregon kernel: [66571.610904] 3 locks held by cc1/21330:
> Nov 17 14:07:39 thoregon kernel: [66571.610905]  #0:
> (&mm->mmap_sem){++++++}, at: [<ffffffff81029d8b>]
> __do_page_fault+0xfb/0x480
> Nov 17 14:07:39 thoregon kernel: [66571.610910]  #1:
> (shrinker_rwsem){++++..}, at: [<ffffffff810bbd02>]
> shrink_slab+0x32/0x1f0
> Nov 17 14:07:39 thoregon kernel: [66571.610915]  #2:
> (&type->s_umount_key#20){++++.+}, at: [<ffffffff810f5a7e>]
> grab_super_passive+0x3e/0x90
> Nov 17 14:07:39 thoregon kernel: [66571.610921]
> Nov 17 14:07:39 thoregon kernel: [66571.610921] the shortest
> dependencies between 2nd lock and 1st lock:
> Nov 17 14:07:39 thoregon kernel: [66571.610927]  ->
> (&(&ip->i_lock)->mr_lock/1){+.+.+.} ops: 169649 {
> Nov 17 14:07:39 thoregon kernel: [66571.610931]     HARDIRQ-ON-W at:
> Nov 17 14:07:39 thoregon kernel: [66571.610932]
> [<ffffffff8107f091>] __lock_acquire+0x631/0x1c00
> Nov 17 14:07:39 thoregon kernel: [66571.610935]
> [<ffffffff81080b55>] lock_acquire+0x55/0x70
> Nov 17 14:07:39 thoregon kernel: [66571.610937]
> [<ffffffff8106126a>] down_write_nested+0x4a/0x70
> Nov 17 14:07:39 thoregon kernel: [66571.610941]
> [<ffffffff811e8164>] xfs_ilock+0x84/0xb0
> Nov 17 14:07:39 thoregon kernel: [66571.610944]
> [<ffffffff811f51a4>] xfs_create+0x1d4/0x5a0
> Nov 17 14:07:39 thoregon kernel: [66571.610946]
> [<ffffffff811eca2a>] xfs_vn_mknod+0x8a/0x1b0
> Nov 17 14:07:39 thoregon kernel: [66571.610948]
> [<ffffffff811ecb7e>] xfs_vn_create+0xe/0x10
> Nov 17 14:07:39 thoregon kernel: [66571.610950]
> [<ffffffff81100322>] vfs_create+0x72/0xc0
> Nov 17 14:07:39 thoregon kernel: [66571.610953]
> [<ffffffff81100b7e>] do_last.isra.69+0x80e/0xc80
> Nov 17 14:07:39 thoregon kernel: [66571.610955]
> [<ffffffff8110109b>] path_openat.isra.70+0xab/0x490
> Nov 17 14:07:39 thoregon kernel: [66571.610957]
> [<ffffffff8110183d>] do_filp_open+0x3d/0xa0
> Nov 17 14:07:39 thoregon kernel: [66571.610959]
> [<ffffffff810f2129>] do_sys_open+0xf9/0x1e0
> Nov 17 14:07:39 thoregon kernel: [66571.610962]
> [<ffffffff810f222c>] sys_open+0x1c/0x20
> Nov 17 14:07:39 thoregon kernel: [66571.610964]
> [<ffffffff816b2e52>] system_call_fastpath+0x16/0x1b
> Nov 17 14:07:39 thoregon kernel: [66571.610967]     SOFTIRQ-ON-W at:
> Nov 17 14:07:39 thoregon kernel: [66571.610968]
> [<ffffffff8107f0c7>] __lock_acquire+0x667/0x1c00
> Nov 17 14:07:39 thoregon kernel: [66571.610970]
> [<ffffffff81080b55>] lock_acquire+0x55/0x70
> Nov 17 14:07:39 thoregon kernel: [66571.610972]
> [<ffffffff8106126a>] down_write_nested+0x4a/0x70
> Nov 17 14:07:39 thoregon kernel: [66571.610974]
> [<ffffffff811e8164>] xfs_ilock+0x84/0xb0
> Nov 17 14:07:39 thoregon kernel: [66571.610976]
> [<ffffffff811f51a4>] xfs_create+0x1d4/0x5a0
> Nov 17 14:07:39 thoregon kernel: [66571.610977]
> [<ffffffff811eca2a>] xfs_vn_mknod+0x8a/0x1b0
> Nov 17 14:07:39 thoregon kernel: [66571.610979]
> [<ffffffff811ecb7e>] xfs_vn_create+0xe/0x10
> Nov 17 14:07:39 thoregon kernel: [66571.610981]
> [<ffffffff81100322>] vfs_create+0x72/0xc0
> Nov 17 14:07:39 thoregon kernel: [66571.610983]
> [<ffffffff81100b7e>] do_last.isra.69+0x80e/0xc80
> Nov 17 14:07:39 thoregon kernel: [66571.610985]
> [<ffffffff8110109b>] path_openat.isra.70+0xab/0x490
> Nov 17 14:07:39 thoregon kernel: [66571.610987]
> [<ffffffff8110183d>] do_filp_open+0x3d/0xa0
> Nov 17 14:07:39 thoregon kernel: [66571.610989]
> [<ffffffff810f2129>] do_sys_open+0xf9/0x1e0
> Nov 17 14:07:39 thoregon kernel: [66571.610991]
> [<ffffffff810f222c>] sys_open+0x1c/0x20
> Nov 17 14:07:39 thoregon kernel: [66571.610993]
> [<ffffffff816b2e52>] system_call_fastpath+0x16/0x1b
> Nov 17 14:07:39 thoregon kernel: [66571.610995]     RECLAIM_FS-ON-W at:
> Nov 17 14:07:39 thoregon kernel: [66571.610996]
>   [<ffffffff8108137e>] mark_held_locks+0x7e/0x130
> Nov 17 14:07:39 thoregon kernel: [66571.610998]
>   [<ffffffff81081a63>] lockdep_trace_alloc+0x63/0xc0
> Nov 17 14:07:39 thoregon kernel: [66571.610999]
>   [<ffffffff810e9dc5>] kmem_cache_alloc+0x35/0xe0
> Nov 17 14:07:39 thoregon kernel: [66571.611002]
>   [<ffffffff810dba21>] vm_map_ram+0x271/0x770
> Nov 17 14:07:39 thoregon kernel: [66571.611004]
>   [<ffffffff811e12b6>] _xfs_buf_map_pages+0x46/0xe0
> Nov 17 14:07:39 thoregon kernel: [66571.611008]
>   [<ffffffff811e21ca>] xfs_buf_get_map+0x8a/0x130
> Nov 17 14:07:39 thoregon kernel: [66571.611009]
>   [<ffffffff81233989>] xfs_trans_get_buf_map+0xa9/0xd0
> Nov 17 14:07:39 thoregon kernel: [66571.611011]
>   [<ffffffff8121bc2d>] xfs_ialloc_inode_init+0xcd/0x1d0
> Nov 17 14:07:39 thoregon kernel: [66571.611015]
>   [<ffffffff8121c16f>] xfs_ialloc_ag_alloc+0x18f/0x500
> Nov 17 14:07:39 thoregon kernel: [66571.611017]
>   [<ffffffff8121d955>] xfs_dialloc+0x185/0x2a0
> Nov 17 14:07:39 thoregon kernel: [66571.611019]
>   [<ffffffff8121f068>] xfs_ialloc+0x58/0x650
> Nov 17 14:07:39 thoregon kernel: [66571.611021]
>   [<ffffffff811f3995>] xfs_dir_ialloc+0x65/0x270
> Nov 17 14:07:39 thoregon kernel: [66571.611023]
>   [<ffffffff811f537c>] xfs_create+0x3ac/0x5a0
> Nov 17 14:07:39 thoregon kernel: [66571.611024]
>   [<ffffffff811eca2a>] xfs_vn_mknod+0x8a/0x1b0
> Nov 17 14:07:39 thoregon kernel: [66571.611026]
>   [<ffffffff811ecb61>] xfs_vn_mkdir+0x11/0x20
> Nov 17 14:07:39 thoregon kernel: [66571.611028]
>   [<ffffffff8110016f>] vfs_mkdir+0x7f/0xd0
> Nov 17 14:07:39 thoregon kernel: [66571.611030]
>   [<ffffffff81101b83>] sys_mkdirat+0x43/0x80
> Nov 17 14:07:39 thoregon kernel: [66571.611032]
>   [<ffffffff81101bd4>] sys_mkdir+0x14/0x20
> Nov 17 14:07:39 thoregon kernel: [66571.611034]
>   [<ffffffff816b2e52>] system_call_fastpath+0x16/0x1b
> Nov 17 14:07:39 thoregon kernel: [66571.611036]     INITIAL USE at:
> Nov 17 14:07:39 thoregon kernel: [66571.611037]
> [<ffffffff8107ed49>] __lock_acquire+0x2e9/0x1c00
> Nov 17 14:07:39 thoregon kernel: [66571.611038]
> [<ffffffff81080b55>] lock_acquire+0x55/0x70
> Nov 17 14:07:39 thoregon kernel: [66571.611040]
> [<ffffffff8106126a>] down_write_nested+0x4a/0x70
> Nov 17 14:07:39 thoregon kernel: [66571.611042]
> [<ffffffff811e8164>] xfs_ilock+0x84/0xb0
> Nov 17 14:07:39 thoregon kernel: [66571.611044]
> [<ffffffff811f51a4>] xfs_create+0x1d4/0x5a0
> Nov 17 14:07:39 thoregon kernel: [66571.611046]
> [<ffffffff811eca2a>] xfs_vn_mknod+0x8a/0x1b0
> Nov 17 14:07:39 thoregon kernel: [66571.611047]
> [<ffffffff811ecb7e>] xfs_vn_create+0xe/0x10
> Nov 17 14:07:39 thoregon kernel: [66571.611049]
> [<ffffffff81100322>] vfs_create+0x72/0xc0
> Nov 17 14:07:39 thoregon kernel: [66571.611051]
> [<ffffffff81100b7e>] do_last.isra.69+0x80e/0xc80
> Nov 17 14:07:39 thoregon kernel: [66571.611053]
> [<ffffffff8110109b>] path_openat.isra.70+0xab/0x490
> Nov 17 14:07:39 thoregon kernel: [66571.611055]
> [<ffffffff8110183d>] do_filp_open+0x3d/0xa0
> Nov 17 14:07:39 thoregon kernel: [66571.611057]
> [<ffffffff810f2129>] do_sys_open+0xf9/0x1e0
> Nov 17 14:07:39 thoregon kernel: [66571.611059]
> [<ffffffff810f222c>] sys_open+0x1c/0x20
> Nov 17 14:07:39 thoregon kernel: [66571.611061]
> [<ffffffff816b2e52>] system_call_fastpath+0x16/0x1b
> Nov 17 14:07:39 thoregon kernel: [66571.611063]   }
> Nov 17 14:07:39 thoregon kernel: [66571.611064]   ... key      at:
> [<ffffffff825b4b81>] __key.41357+0x1/0x8
> Nov 17 14:07:39 thoregon kernel: [66571.611066]   ... acquired at:
> Nov 17 14:07:39 thoregon kernel: [66571.611067]
> [<ffffffff81080b55>] lock_acquire+0x55/0x70
> Nov 17 14:07:39 thoregon kernel: [66571.611069]
> [<ffffffff8106126a>] down_write_nested+0x4a/0x70
> Nov 17 14:07:39 thoregon kernel: [66571.611071]
> [<ffffffff811e8164>] xfs_ilock+0x84/0xb0
> Nov 17 14:07:39 thoregon kernel: [66571.611073]
> [<ffffffff811f51a4>] xfs_create+0x1d4/0x5a0
> Nov 17 14:07:39 thoregon kernel: [66571.611074]
> [<ffffffff811eca2a>] xfs_vn_mknod+0x8a/0x1b0
> Nov 17 14:07:39 thoregon kernel: [66571.611076]
> [<ffffffff811ecb7e>] xfs_vn_create+0xe/0x10
> Nov 17 14:07:39 thoregon kernel: [66571.611078]
> [<ffffffff81100322>] vfs_create+0x72/0xc0
> Nov 17 14:07:39 thoregon kernel: [66571.611080]
> [<ffffffff81100b7e>] do_last.isra.69+0x80e/0xc80
> Nov 17 14:07:39 thoregon kernel: [66571.611082]
> [<ffffffff8110109b>] path_openat.isra.70+0xab/0x490
> Nov 17 14:07:39 thoregon kernel: [66571.611084]
> [<ffffffff8110183d>] do_filp_open+0x3d/0xa0
> Nov 17 14:07:39 thoregon kernel: [66571.611086]
> [<ffffffff810f2129>] do_sys_open+0xf9/0x1e0
> Nov 17 14:07:39 thoregon kernel: [66571.611088]
> [<ffffffff810f222c>] sys_open+0x1c/0x20
> Nov 17 14:07:39 thoregon kernel: [66571.611090]
> [<ffffffff816b2e52>] system_call_fastpath+0x16/0x1b
> Nov 17 14:07:39 thoregon kernel: [66571.611091]
> Nov 17 14:07:39 thoregon kernel: [66571.611092] ->
> (sb_internal){.+.+.?} ops: 1341531 {
> Nov 17 14:07:39 thoregon kernel: [66571.611095]    HARDIRQ-ON-R at:
> Nov 17 14:07:39 thoregon kernel: [66571.611096]
> [<ffffffff8107ef6a>] __lock_acquire+0x50a/0x1c00
> Nov 17 14:07:39 thoregon kernel: [66571.611098]
> [<ffffffff81080b55>] lock_acquire+0x55/0x70
> Nov 17 14:07:39 thoregon kernel: [66571.611100]
> [<ffffffff810f451b>] __sb_start_write+0xab/0x190
> Nov 17 14:07:39 thoregon kernel: [66571.611102]
> [<ffffffff8122b138>] xfs_trans_alloc+0x28/0x50
> Nov 17 14:07:39 thoregon kernel: [66571.611104]
> [<ffffffff811f5157>] xfs_create+0x187/0x5a0
> Nov 17 14:07:39 thoregon kernel: [66571.611105]
> [<ffffffff811eca2a>] xfs_vn_mknod+0x8a/0x1b0
> Nov 17 14:07:39 thoregon kernel: [66571.611107]
> [<ffffffff811ecb7e>] xfs_vn_create+0xe/0x10
> Nov 17 14:07:39 thoregon kernel: [66571.611109]
> [<ffffffff81100322>] vfs_create+0x72/0xc0
> Nov 17 14:07:39 thoregon kernel: [66571.611111]
> [<ffffffff81100b7e>] do_last.isra.69+0x80e/0xc80
> Nov 17 14:07:39 thoregon kernel: [66571.611113]
> [<ffffffff8110109b>] path_openat.isra.70+0xab/0x490
> Nov 17 14:07:39 thoregon kernel: [66571.611115]
> [<ffffffff8110183d>] do_filp_open+0x3d/0xa0
> Nov 17 14:07:39 thoregon kernel: [66571.611117]
> [<ffffffff810f2129>] do_sys_open+0xf9/0x1e0
> Nov 17 14:07:39 thoregon kernel: [66571.611119]
> [<ffffffff810f222c>] sys_open+0x1c/0x20
> Nov 17 14:07:39 thoregon kernel: [66571.611121]
> [<ffffffff816b2e52>] system_call_fastpath+0x16/0x1b
> Nov 17 14:07:39 thoregon kernel: [66571.611123]    SOFTIRQ-ON-R at:
> Nov 17 14:07:39 thoregon kernel: [66571.611124]
> [<ffffffff8107f0c7>] __lock_acquire+0x667/0x1c00
> Nov 17 14:07:39 thoregon kernel: [66571.611126]
> [<ffffffff81080b55>] lock_acquire+0x55/0x70
> Nov 17 14:07:39 thoregon kernel: [66571.611128]
> [<ffffffff810f451b>] __sb_start_write+0xab/0x190
> Nov 17 14:07:39 thoregon kernel: [66571.611130]
> [<ffffffff8122b138>] xfs_trans_alloc+0x28/0x50
> Nov 17 14:07:39 thoregon kernel: [66571.611132]
> [<ffffffff811f5157>] xfs_create+0x187/0x5a0
> Nov 17 14:07:39 thoregon kernel: [66571.611133]
> [<ffffffff811eca2a>] xfs_vn_mknod+0x8a/0x1b0
> Nov 17 14:07:39 thoregon kernel: [66571.611135]
> [<ffffffff811ecb7e>] xfs_vn_create+0xe/0x10
> Nov 17 14:07:39 thoregon kernel: [66571.611137]
> [<ffffffff81100322>] vfs_create+0x72/0xc0
> Nov 17 14:07:39 thoregon kernel: [66571.611139]
> [<ffffffff81100b7e>] do_last.isra.69+0x80e/0xc80
> Nov 17 14:07:39 thoregon kernel: [66571.611141]
> [<ffffffff8110109b>] path_openat.isra.70+0xab/0x490
> Nov 17 14:07:39 thoregon kernel: [66571.611143]
> [<ffffffff8110183d>] do_filp_open+0x3d/0xa0
> Nov 17 14:07:39 thoregon kernel: [66571.611145]
> [<ffffffff810f2129>] do_sys_open+0xf9/0x1e0
> Nov 17 14:07:39 thoregon kernel: [66571.611147]
> [<ffffffff810f222c>] sys_open+0x1c/0x20
> Nov 17 14:07:39 thoregon kernel: [66571.611149]
> [<ffffffff816b2e52>] system_call_fastpath+0x16/0x1b
> Nov 17 14:07:39 thoregon kernel: [66571.611151]    IN-RECLAIM_FS-R at:
> Nov 17 14:07:39 thoregon kernel: [66571.611152]
> [<ffffffff8107efdf>] __lock_acquire+0x57f/0x1c00
> Nov 17 14:07:39 thoregon kernel: [66571.611154]
> [<ffffffff81080b55>] lock_acquire+0x55/0x70
> Nov 17 14:07:39 thoregon kernel: [66571.611156]
> [<ffffffff810f451b>] __sb_start_write+0xab/0x190
> Nov 17 14:07:39 thoregon kernel: [66571.611158]
> [<ffffffff8122b138>] xfs_trans_alloc+0x28/0x50
> Nov 17 14:07:39 thoregon kernel: [66571.611159]
> [<ffffffff811f3e64>] xfs_free_eofblocks+0x104/0x250
> Nov 17 14:07:39 thoregon kernel: [66571.611161]
> [<ffffffff811f4b39>] xfs_inactive+0xa9/0x480
> Nov 17 14:07:39 thoregon kernel: [66571.611163]
> [<ffffffff811efe10>] xfs_fs_evict_inode+0x70/0x80
> Nov 17 14:07:39 thoregon kernel: [66571.611165]
> [<ffffffff8110cb7f>] evict+0xaf/0x1b0
> Nov 17 14:07:39 thoregon kernel: [66571.611168]
> [<ffffffff8110d209>] dispose_list+0x39/0x50
> Nov 17 14:07:39 thoregon kernel: [66571.611170]
> [<ffffffff8110dc63>] prune_icache_sb+0x183/0x340
> Nov 17 14:07:39 thoregon kernel: [66571.611172]
> [<ffffffff810f5bc3>] prune_super+0xf3/0x1a0
> Nov 17 14:07:39 thoregon kernel: [66571.611173]
> [<ffffffff810bbdee>] shrink_slab+0x11e/0x1f0
> Nov 17 14:07:39 thoregon kernel: [66571.611175]
> [<ffffffff810be98f>] try_to_free_pages+0x21f/0x4e0
> Nov 17 14:07:39 thoregon kernel: [66571.611177]
> [<ffffffff810b5ec6>] __alloc_pages_nodemask+0x506/0x800
> Nov 17 14:07:39 thoregon kernel: [66571.611179]
> [<ffffffff810ce56e>] handle_pte_fault+0x5ae/0x7a0
> Nov 17 14:07:39 thoregon kernel: [66571.611182]
> [<ffffffff810cf769>] handle_mm_fault+0x1f9/0x2a0
> Nov 17 14:07:39 thoregon kernel: [66571.611184]
> [<ffffffff81029dfc>] __do_page_fault+0x16c/0x480
> Nov 17 14:07:39 thoregon kernel: [66571.611186]
> [<ffffffff8102a139>] do_page_fault+0x9/0x10
> Nov 17 14:07:39 thoregon kernel: [66571.611188]
> [<ffffffff816b287f>] page_fault+0x1f/0x30
> Nov 17 14:07:39 thoregon kernel: [66571.611190]    RECLAIM_FS-ON-R at:
> Nov 17 14:07:39 thoregon kernel: [66571.611191]
> [<ffffffff8108137e>] mark_held_locks+0x7e/0x130
> Nov 17 14:07:39 thoregon kernel: [66571.611193]
> [<ffffffff81081a63>] lockdep_trace_alloc+0x63/0xc0
> Nov 17 14:07:39 thoregon kernel: [66571.611195]
> [<ffffffff810e9dc5>] kmem_cache_alloc+0x35/0xe0
> Nov 17 14:07:39 thoregon kernel: [66571.611197]
> [<ffffffff811f6d4f>] kmem_zone_alloc+0x5f/0xe0
> Nov 17 14:07:39 thoregon kernel: [66571.611198]
> [<ffffffff811f6de8>] kmem_zone_zalloc+0x18/0x50
> Nov 17 14:07:39 thoregon kernel: [66571.611200]
> [<ffffffff8122b0b2>] _xfs_trans_alloc+0x32/0x90
> Nov 17 14:07:39 thoregon kernel: [66571.611202]
> [<ffffffff8122b148>] xfs_trans_alloc+0x38/0x50
> Nov 17 14:07:39 thoregon kernel: [66571.611204]
> [<ffffffff811f5157>] xfs_create+0x187/0x5a0
> Nov 17 14:07:39 thoregon kernel: [66571.611205]
> [<ffffffff811eca2a>] xfs_vn_mknod+0x8a/0x1b0
> Nov 17 14:07:39 thoregon kernel: [66571.611207]
> [<ffffffff811ecb7e>] xfs_vn_create+0xe/0x10
> Nov 17 14:07:39 thoregon kernel: [66571.611209]
> [<ffffffff81100322>] vfs_create+0x72/0xc0
> Nov 17 14:07:39 thoregon kernel: [66571.611211]
> [<ffffffff81100b7e>] do_last.isra.69+0x80e/0xc80
> Nov 17 14:07:39 thoregon kernel: [66571.611213]
> [<ffffffff8110109b>] path_openat.isra.70+0xab/0x490
> Nov 17 14:07:39 thoregon kernel: [66571.611215]
> [<ffffffff8110183d>] do_filp_open+0x3d/0xa0
> Nov 17 14:07:39 thoregon kernel: [66571.611217]
> [<ffffffff810f2129>] do_sys_open+0xf9/0x1e0
> Nov 17 14:07:39 thoregon kernel: [66571.611219]
> [<ffffffff810f222c>] sys_open+0x1c/0x20
> Nov 17 14:07:39 thoregon kernel: [66571.611221]
> [<ffffffff816b2e52>] system_call_fastpath+0x16/0x1b
> Nov 17 14:07:39 thoregon kernel: [66571.611223]    INITIAL USE at:
> Nov 17 14:07:39 thoregon kernel: [66571.611224]
> [<ffffffff8107ed49>] __lock_acquire+0x2e9/0x1c00
> Nov 17 14:07:39 thoregon kernel: [66571.611225]
> [<ffffffff81080b55>] lock_acquire+0x55/0x70
> Nov 17 14:07:39 thoregon kernel: [66571.611227]
> [<ffffffff810f451b>] __sb_start_write+0xab/0x190
> Nov 17 14:07:39 thoregon kernel: [66571.611229]
> [<ffffffff8122b138>] xfs_trans_alloc+0x28/0x50
> Nov 17 14:07:39 thoregon kernel: [66571.611231]
> [<ffffffff811f5157>] xfs_create+0x187/0x5a0
> Nov 17 14:07:39 thoregon kernel: [66571.611232]
> [<ffffffff811eca2a>] xfs_vn_mknod+0x8a/0x1b0
> Nov 17 14:07:39 thoregon kernel: [66571.611234]
> [<ffffffff811ecb7e>] xfs_vn_create+0xe/0x10
> Nov 17 14:07:39 thoregon kernel: [66571.611236]
> [<ffffffff81100322>] vfs_create+0x72/0xc0
> Nov 17 14:07:39 thoregon kernel: [66571.611238]
> [<ffffffff81100b7e>] do_last.isra.69+0x80e/0xc80
> Nov 17 14:07:39 thoregon kernel: [66571.611240]
> [<ffffffff8110109b>] path_openat.isra.70+0xab/0x490
> Nov 17 14:07:39 thoregon kernel: [66571.611242]
> [<ffffffff8110183d>] do_filp_open+0x3d/0xa0
> Nov 17 14:07:39 thoregon kernel: [66571.611244]
> [<ffffffff810f2129>] do_sys_open+0xf9/0x1e0
> Nov 17 14:07:39 thoregon kernel: [66571.611246]
> [<ffffffff810f222c>] sys_open+0x1c/0x20
> Nov 17 14:07:39 thoregon kernel: [66571.611248]
> [<ffffffff816b2e52>] system_call_fastpath+0x16/0x1b
> Nov 17 14:07:39 thoregon kernel: [66571.611250]  }
> Nov 17 14:07:39 thoregon kernel: [66571.611251]  ... key      at:
> [<ffffffff81c34e40>] xfs_fs_type+0x60/0x80
> Nov 17 14:07:39 thoregon kernel: [66571.611254]  ... acquired at:
> Nov 17 14:07:39 thoregon kernel: [66571.611254]
> [<ffffffff8107df3b>] check_usage_forwards+0x10b/0x140
> Nov 17 14:07:39 thoregon kernel: [66571.611256]
> [<ffffffff8107e900>] mark_lock+0x190/0x2f0
> Nov 17 14:07:39 thoregon kernel: [66571.611258]
> [<ffffffff8107efdf>] __lock_acquire+0x57f/0x1c00
> Nov 17 14:07:39 thoregon kernel: [66571.611260]
> [<ffffffff81080b55>] lock_acquire+0x55/0x70
> Nov 17 14:07:39 thoregon kernel: [66571.611261]
> [<ffffffff810f451b>] __sb_start_write+0xab/0x190
> Nov 17 14:07:39 thoregon kernel: [66571.611263]
> [<ffffffff8122b138>] xfs_trans_alloc+0x28/0x50
> Nov 17 14:07:39 thoregon kernel: [66571.611265]
> [<ffffffff811f3e64>] xfs_free_eofblocks+0x104/0x250
> Nov 17 14:07:39 thoregon kernel: [66571.611266]
> [<ffffffff811f4b39>] xfs_inactive+0xa9/0x480
> Nov 17 14:07:39 thoregon kernel: [66571.611268]
> [<ffffffff811efe10>] xfs_fs_evict_inode+0x70/0x80
> Nov 17 14:07:39 thoregon kernel: [66571.611270]
> [<ffffffff8110cb7f>] evict+0xaf/0x1b0
> Nov 17 14:07:39 thoregon kernel: [66571.611271]
> [<ffffffff8110d209>] dispose_list+0x39/0x50
> Nov 17 14:07:39 thoregon kernel: [66571.611273]
> [<ffffffff8110dc63>] prune_icache_sb+0x183/0x340
> Nov 17 14:07:39 thoregon kernel: [66571.611275]
> [<ffffffff810f5bc3>] prune_super+0xf3/0x1a0
> Nov 17 14:07:39 thoregon kernel: [66571.611277]
> [<ffffffff810bbdee>] shrink_slab+0x11e/0x1f0
> Nov 17 14:07:39 thoregon kernel: [66571.611278]
> [<ffffffff810be98f>] try_to_free_pages+0x21f/0x4e0
> Nov 17 14:07:39 thoregon kernel: [66571.611280]
> [<ffffffff810b5ec6>] __alloc_pages_nodemask+0x506/0x800
> Nov 17 14:07:39 thoregon kernel: [66571.611282]
> [<ffffffff810ce56e>] handle_pte_fault+0x5ae/0x7a0
> Nov 17 14:07:39 thoregon kernel: [66571.611284]
> [<ffffffff810cf769>] handle_mm_fault+0x1f9/0x2a0
> Nov 17 14:07:39 thoregon kernel: [66571.611286]
> [<ffffffff81029dfc>] __do_page_fault+0x16c/0x480
> Nov 17 14:07:39 thoregon kernel: [66571.611287]
> [<ffffffff8102a139>] do_page_fault+0x9/0x10
> Nov 17 14:07:39 thoregon kernel: [66571.611289]
> [<ffffffff816b287f>] page_fault+0x1f/0x30
> Nov 17 14:07:39 thoregon kernel: [66571.611291]
> Nov 17 14:07:39 thoregon kernel: [66571.611292]
> Nov 17 14:07:39 thoregon kernel: [66571.611292] stack backtrace:
> Nov 17 14:07:39 thoregon kernel: [66571.611294] Pid: 21330, comm: cc1
> Not tainted 3.7.0-rc5 #1
> Nov 17 14:07:39 thoregon kernel: [66571.611295] Call Trace:
> Nov 17 14:07:39 thoregon kernel: [66571.611298]  [<ffffffff8107de28>]
> print_irq_inversion_bug.part.37+0x1e8/0x1f0
> Nov 17 14:07:39 thoregon kernel: [66571.611300]  [<ffffffff8107df3b>]
> check_usage_forwards+0x10b/0x140
> Nov 17 14:07:39 thoregon kernel: [66571.611303]  [<ffffffff8107e900>]
> mark_lock+0x190/0x2f0
> Nov 17 14:07:39 thoregon kernel: [66571.611306]  [<ffffffff8150406e>]
> ? dm_request+0x2e/0x2a0
> Nov 17 14:07:39 thoregon kernel: [66571.611308]  [<ffffffff8107de30>]
> ? print_irq_inversion_bug.part.37+0x1f0/0x1f0
> Nov 17 14:07:39 thoregon kernel: [66571.611310]  [<ffffffff8107efdf>]
> __lock_acquire+0x57f/0x1c00
> Nov 17 14:07:39 thoregon kernel: [66571.611313]  [<ffffffff812202f4>]
> ? xfs_iext_bno_to_ext+0x84/0x160
> Nov 17 14:07:39 thoregon kernel: [66571.611316]  [<ffffffff8120a023>]
> ? xfs_bmbt_get_all+0x13/0x20
> Nov 17 14:07:39 thoregon kernel: [66571.611318]  [<ffffffff81200fb4>]
> ? xfs_bmap_search_multi_extents+0xa4/0x110
> Nov 17 14:07:39 thoregon kernel: [66571.611320]  [<ffffffff81080b55>]
> lock_acquire+0x55/0x70
> Nov 17 14:07:39 thoregon kernel: [66571.611322]  [<ffffffff8122b138>]
> ? xfs_trans_alloc+0x28/0x50
> Nov 17 14:07:39 thoregon kernel: [66571.611324]  [<ffffffff810f451b>]
> __sb_start_write+0xab/0x190
> Nov 17 14:07:39 thoregon kernel: [66571.611326]  [<ffffffff8122b138>]
> ? xfs_trans_alloc+0x28/0x50
> Nov 17 14:07:39 thoregon kernel: [66571.611328]  [<ffffffff8122b138>]
> ? xfs_trans_alloc+0x28/0x50
> Nov 17 14:07:39 thoregon kernel: [66571.611330]  [<ffffffff8122b138>]
> xfs_trans_alloc+0x28/0x50
> Nov 17 14:07:39 thoregon kernel: [66571.611332]  [<ffffffff811f3e64>]
> xfs_free_eofblocks+0x104/0x250
> Nov 17 14:07:39 thoregon kernel: [66571.611334]  [<ffffffff816b204b>]
> ? _raw_spin_unlock_irq+0x2b/0x50
> Nov 17 14:07:39 thoregon kernel: [66571.611336]  [<ffffffff811f4b39>]
> xfs_inactive+0xa9/0x480
> Nov 17 14:07:39 thoregon kernel: [66571.611337]  [<ffffffff816b204b>]
> ? _raw_spin_unlock_irq+0x2b/0x50
> Nov 17 14:07:39 thoregon kernel: [66571.611340]  [<ffffffff811efe10>]
> xfs_fs_evict_inode+0x70/0x80
> Nov 17 14:07:39 thoregon kernel: [66571.611342]  [<ffffffff8110cb7f>]
> evict+0xaf/0x1b0
> Nov 17 14:07:39 thoregon kernel: [66571.611344]  [<ffffffff8110d209>]
> dispose_list+0x39/0x50
> Nov 17 14:07:39 thoregon kernel: [66571.611346]  [<ffffffff8110dc63>]
> prune_icache_sb+0x183/0x340
> Nov 17 14:07:39 thoregon kernel: [66571.611347]  [<ffffffff810f5bc3>]
> prune_super+0xf3/0x1a0
> Nov 17 14:07:39 thoregon kernel: [66571.611349]  [<ffffffff810bbdee>]
> shrink_slab+0x11e/0x1f0
> Nov 17 14:07:39 thoregon kernel: [66571.611352]  [<ffffffff810be98f>]
> try_to_free_pages+0x21f/0x4e0
> Nov 17 14:07:39 thoregon kernel: [66571.611354]  [<ffffffff810b5ec6>]
> __alloc_pages_nodemask+0x506/0x800
> Nov 17 14:07:39 thoregon kernel: [66571.611356]  [<ffffffff810b9e40>]
> ? lru_deactivate_fn+0x1c0/0x1c0
> Nov 17 14:07:39 thoregon kernel: [66571.611358]  [<ffffffff810ce56e>]
> handle_pte_fault+0x5ae/0x7a0
> Nov 17 14:07:39 thoregon kernel: [66571.611360]  [<ffffffff810cf769>]
> handle_mm_fault+0x1f9/0x2a0
> Nov 17 14:07:39 thoregon kernel: [66571.611363]  [<ffffffff81029dfc>]
> __do_page_fault+0x16c/0x480
> Nov 17 14:07:39 thoregon kernel: [66571.611366]  [<ffffffff8129c7ad>]
> ? trace_hardirqs_off_thunk+0x3a/0x3c
> Nov 17 14:07:39 thoregon kernel: [66571.611368]  [<ffffffff8102a139>]
> do_page_fault+0x9/0x10
> Nov 17 14:07:39 thoregon kernel: [66571.611370]  [<ffffffff816b287f>]
> page_fault+0x1f/0x30


* Re: Hang in XFS reclaim on 3.7.0-rc3
@ 2012-11-18 15:29         ` Torsten Kaiser
From: Torsten Kaiser @ 2012-11-18 15:29 UTC
  To: Dave Chinner; +Cc: Linux Kernel, xfs

On Sun, Nov 18, 2012 at 11:24 AM, Torsten Kaiser
<just.for.lkml@googlemail.com> wrote:
> On Tue, Oct 30, 2012 at 9:37 PM, Torsten Kaiser
> <just.for.lkml@googlemail.com> wrote:
>> I will keep LOCKDEP enabled on that system, and if there really is
>> another splat, I will report back here. But I rather doubt that this
>> will be needed.
>
> After the patch, I did not see this problem again, but today I found
> another LOCKDEP report that also looks XFS-related.
> I found it twice in the logs, and as the two instances differ
> slightly, I will attach both versions.

> Nov  6 21:57:09 thoregon kernel: [ 9941.104353] 3.7.0-rc4 #1 Not tainted
> Nov  6 21:57:09 thoregon kernel: [ 9941.104355] inconsistent
> {RECLAIM_FS-ON-W} -> {IN-RECLAIM_FS-W} usage.
> Nov  6 21:57:09 thoregon kernel: [ 9941.104430]        CPU0
> Nov  6 21:57:09 thoregon kernel: [ 9941.104431]        ----
> Nov  6 21:57:09 thoregon kernel: [ 9941.104432]   lock(&(&ip->i_lock)->mr_lock);
> Nov  6 21:57:09 thoregon kernel: [ 9941.104433]   <Interrupt>
> Nov  6 21:57:09 thoregon kernel: [ 9941.104434]
> lock(&(&ip->i_lock)->mr_lock);
> Nov  6 21:57:09 thoregon kernel: [ 9941.104435]
> Nov  6 21:57:09 thoregon kernel: [ 9941.104435]  *** DEADLOCK ***

Sorry! I copied the wrong report. Your fix only landed in -rc5, so my
vanilla -rc4 did (also) report the old problem again, and I
copy&pasted that report instead of the second appearance of the new
problem.

Here is the correct second report of the sb_internal vs
ip->i_lock->mr_lock problem:
[110926.972477]
[110926.972482] =========================================================
[110926.972484] [ INFO: possible irq lock inversion dependency detected ]
[110926.972486] 3.7.0-rc4 #1 Not tainted
[110926.972487] ---------------------------------------------------------
[110926.972489] kswapd0/725 just changed the state of lock:
[110926.972490]  (sb_internal){.+.+.?}, at: [<ffffffff8122b268>]
xfs_trans_alloc+0x28/0x50
[110926.972499] but this lock took another, RECLAIM_FS-unsafe lock in the past:
[110926.972500]  (&(&ip->i_lock)->mr_lock/1){+.+.+.}
[110926.972500]
[110926.972500] and interrupts could create inverse lock ordering between them.
[110926.972500]
[110926.972503]
[110926.972503] other info that might help us debug this:
[110926.972504]  Possible interrupt unsafe locking scenario:
[110926.972504]
[110926.972505]        CPU0                    CPU1
[110926.972506]        ----                    ----
[110926.972507]   lock(&(&ip->i_lock)->mr_lock/1);
[110926.972509]                                local_irq_disable();
[110926.972509]                                lock(sb_internal);
[110926.972511]                                lock(&(&ip->i_lock)->mr_lock/1);
[110926.972512]   <Interrupt>
[110926.972513]     lock(sb_internal);
[110926.972514]
[110926.972514]  *** DEADLOCK ***
[110926.972514]
[110926.972516] 2 locks held by kswapd0/725:
[110926.972517]  #0:  (shrinker_rwsem){++++..}, at:
[<ffffffff810bbd22>] shrink_slab+0x32/0x1f0
[110926.972522]  #1:  (&type->s_umount_key#20){++++.+}, at:
[<ffffffff810f5a8e>] grab_super_passive+0x3e/0x90
[110926.972527]
[110926.972527] the shortest dependencies between 2nd lock and 1st lock:
[110926.972533]  -> (&(&ip->i_lock)->mr_lock/1){+.+.+.} ops: 58117 {
[110926.972536]     HARDIRQ-ON-W at:
[110926.972537]                       [<ffffffff8107f091>]
__lock_acquire+0x631/0x1c00
[110926.972540]                       [<ffffffff81080b55>]
lock_acquire+0x55/0x70
[110926.972542]                       [<ffffffff8106126a>]
down_write_nested+0x4a/0x70
[110926.972545]                       [<ffffffff811e8164>] xfs_ilock+0x84/0xb0
[110926.972548]                       [<ffffffff811f5194>]
xfs_create+0x1d4/0x5a0
[110926.972550]                       [<ffffffff811eca1a>]
xfs_vn_mknod+0x8a/0x1b0
[110926.972552]                       [<ffffffff811ecb6e>]
xfs_vn_create+0xe/0x10
[110926.972554]                       [<ffffffff81100332>] vfs_create+0x72/0xc0
[110926.972556]                       [<ffffffff81100b8e>]
do_last.isra.69+0x80e/0xc80
[110926.972558]                       [<ffffffff811010ab>]
path_openat.isra.70+0xab/0x490
[110926.972560]                       [<ffffffff8110184d>]
do_filp_open+0x3d/0xa0
[110926.972562]                       [<ffffffff810f2139>]
do_sys_open+0xf9/0x1e0
[110926.972565]                       [<ffffffff810f223c>] sys_open+0x1c/0x20
[110926.972567]                       [<ffffffff816b2d12>]
system_call_fastpath+0x16/0x1b
[110926.972570]     SOFTIRQ-ON-W at:
[110926.972571]                       [<ffffffff8107f0c7>]
__lock_acquire+0x667/0x1c00
[110926.972573]                       [<ffffffff81080b55>]
lock_acquire+0x55/0x70
[110926.972574]                       [<ffffffff8106126a>]
down_write_nested+0x4a/0x70
[110926.972576]                       [<ffffffff811e8164>] xfs_ilock+0x84/0xb0
[110926.972578]                       [<ffffffff811f5194>]
xfs_create+0x1d4/0x5a0
[110926.972580]                       [<ffffffff811eca1a>]
xfs_vn_mknod+0x8a/0x1b0
[110926.972581]                       [<ffffffff811ecb6e>]
xfs_vn_create+0xe/0x10
[110926.972583]                       [<ffffffff81100332>] vfs_create+0x72/0xc0
[110926.972585]                       [<ffffffff81100b8e>]
do_last.isra.69+0x80e/0xc80
[110926.972587]                       [<ffffffff811010ab>]
path_openat.isra.70+0xab/0x490
[110926.972589]                       [<ffffffff8110184d>]
do_filp_open+0x3d/0xa0
[110926.972591]                       [<ffffffff810f2139>]
do_sys_open+0xf9/0x1e0
[110926.972593]                       [<ffffffff810f223c>] sys_open+0x1c/0x20
[110926.972595]                       [<ffffffff816b2d12>]
system_call_fastpath+0x16/0x1b
[110926.972597]     RECLAIM_FS-ON-W at:
[110926.972598]                          [<ffffffff8108137e>]
mark_held_locks+0x7e/0x130
[110926.972600]                          [<ffffffff81081a63>]
lockdep_trace_alloc+0x63/0xc0
[110926.972601]                          [<ffffffff810e9dd5>]
kmem_cache_alloc+0x35/0xe0
[110926.972603]                          [<ffffffff810dba31>]
vm_map_ram+0x271/0x770
[110926.972606]                          [<ffffffff811e1316>]
_xfs_buf_map_pages+0x46/0xe0
[110926.972609]                          [<ffffffff811e222a>]
xfs_buf_get_map+0x8a/0x130
[110926.972610]                          [<ffffffff81233ab9>]
xfs_trans_get_buf_map+0xa9/0xd0
[110926.972613]                          [<ffffffff8121bced>]
xfs_ialloc_inode_init+0xcd/0x1d0
[110926.972616]                          [<ffffffff8121c25e>]
xfs_ialloc_ag_alloc+0x1be/0x560
[110926.972618]                          [<ffffffff8121da65>]
xfs_dialloc+0x185/0x2a0
[110926.972619]                          [<ffffffff8121f198>]
xfs_ialloc+0x58/0x650
[110926.972621]                          [<ffffffff811f3985>]
xfs_dir_ialloc+0x65/0x270
[110926.972623]                          [<ffffffff811f536c>]
xfs_create+0x3ac/0x5a0
[110926.972624]                          [<ffffffff811eca1a>]
xfs_vn_mknod+0x8a/0x1b0
[110926.972626]                          [<ffffffff811ecb6e>]
xfs_vn_create+0xe/0x10
[110926.972628]                          [<ffffffff81100332>]
vfs_create+0x72/0xc0
[110926.972630]                          [<ffffffff81100b8e>]
do_last.isra.69+0x80e/0xc80
[110926.972632]                          [<ffffffff811010ab>]
path_openat.isra.70+0xab/0x490
[110926.972634]                          [<ffffffff8110184d>]
do_filp_open+0x3d/0xa0
[110926.972636]                          [<ffffffff810f2139>]
do_sys_open+0xf9/0x1e0
[110926.972638]                          [<ffffffff810f223c>] sys_open+0x1c/0x20
[110926.972640]                          [<ffffffff816b2d12>]
system_call_fastpath+0x16/0x1b
[110926.972642]     INITIAL USE at:
[110926.972642]                      [<ffffffff8107ed49>]
__lock_acquire+0x2e9/0x1c00
[110926.972644]                      [<ffffffff81080b55>] lock_acquire+0x55/0x70
[110926.972646]                      [<ffffffff8106126a>]
down_write_nested+0x4a/0x70
[110926.972648]                      [<ffffffff811e8164>] xfs_ilock+0x84/0xb0
[110926.972650]                      [<ffffffff811f5194>] xfs_create+0x1d4/0x5a0
[110926.972651]                      [<ffffffff811eca1a>]
xfs_vn_mknod+0x8a/0x1b0
[110926.972653]                      [<ffffffff811ecb6e>] xfs_vn_create+0xe/0x10
[110926.972655]                      [<ffffffff81100332>] vfs_create+0x72/0xc0
[110926.972657]                      [<ffffffff81100b8e>]
do_last.isra.69+0x80e/0xc80
[110926.972659]                      [<ffffffff811010ab>]
path_openat.isra.70+0xab/0x490
[110926.972661]                      [<ffffffff8110184d>] do_filp_open+0x3d/0xa0
[110926.972663]                      [<ffffffff810f2139>] do_sys_open+0xf9/0x1e0
[110926.972664]                      [<ffffffff810f223c>] sys_open+0x1c/0x20
[110926.972666]                      [<ffffffff816b2d12>]
system_call_fastpath+0x16/0x1b
[110926.972668]   }
[110926.972669]   ... key      at: [<ffffffff825b4b81>] __key.41355+0x1/0x8
[110926.972672]   ... acquired at:
[110926.972672]    [<ffffffff81080b55>] lock_acquire+0x55/0x70
[110926.972674]    [<ffffffff8106126a>] down_write_nested+0x4a/0x70
[110926.972676]    [<ffffffff811e8164>] xfs_ilock+0x84/0xb0
[110926.972678]    [<ffffffff811f5194>] xfs_create+0x1d4/0x5a0
[110926.972679]    [<ffffffff811eca1a>] xfs_vn_mknod+0x8a/0x1b0
[110926.972681]    [<ffffffff811ecb6e>] xfs_vn_create+0xe/0x10
[110926.972683]    [<ffffffff81100332>] vfs_create+0x72/0xc0
[110926.972684]    [<ffffffff81100b8e>] do_last.isra.69+0x80e/0xc80
[110926.972686]    [<ffffffff811010ab>] path_openat.isra.70+0xab/0x490
[110926.972688]    [<ffffffff8110184d>] do_filp_open+0x3d/0xa0
[110926.972690]    [<ffffffff810f2139>] do_sys_open+0xf9/0x1e0
[110926.972692]    [<ffffffff810f223c>] sys_open+0x1c/0x20
[110926.972694]    [<ffffffff816b2d12>] system_call_fastpath+0x16/0x1b
[110926.972696]
[110926.972696] -> (sb_internal){.+.+.?} ops: 1710064 {
[110926.972699]    HARDIRQ-ON-R at:
[110926.972700]                     [<ffffffff8107ef6a>]
__lock_acquire+0x50a/0x1c00
[110926.972702]                     [<ffffffff81080b55>] lock_acquire+0x55/0x70
[110926.972704]                     [<ffffffff810f452b>]
__sb_start_write+0xab/0x190
[110926.972705]                     [<ffffffff8122b268>]
xfs_trans_alloc+0x28/0x50
[110926.972707]                     [<ffffffff811f5147>] xfs_create+0x187/0x5a0
[110926.972709]                     [<ffffffff811eca1a>] xfs_vn_mknod+0x8a/0x1b0
[110926.972711]                     [<ffffffff811ecb6e>] xfs_vn_create+0xe/0x10
[110926.972712]                     [<ffffffff81100332>] vfs_create+0x72/0xc0
[110926.972714]                     [<ffffffff81100b8e>]
do_last.isra.69+0x80e/0xc80
[110926.972716]                     [<ffffffff811010ab>]
path_openat.isra.70+0xab/0x490
[110926.972718]                     [<ffffffff8110184d>] do_filp_open+0x3d/0xa0
[110926.972720]                     [<ffffffff810f2139>] do_sys_open+0xf9/0x1e0
[110926.972722]                     [<ffffffff810f223c>] sys_open+0x1c/0x20
[110926.972724]                     [<ffffffff816b2d12>]
system_call_fastpath+0x16/0x1b
[110926.972726]    SOFTIRQ-ON-R at:
[110926.972727]                     [<ffffffff8107f0c7>]
__lock_acquire+0x667/0x1c00
[110926.972728]                     [<ffffffff81080b55>] lock_acquire+0x55/0x70
[110926.972730]                     [<ffffffff810f452b>]
__sb_start_write+0xab/0x190
[110926.972732]                     [<ffffffff8122b268>]
xfs_trans_alloc+0x28/0x50
[110926.972734]                     [<ffffffff811f5147>] xfs_create+0x187/0x5a0
[110926.972735]                     [<ffffffff811eca1a>] xfs_vn_mknod+0x8a/0x1b0
[110926.972737]                     [<ffffffff811ecb6e>] xfs_vn_create+0xe/0x10
[110926.972739]                     [<ffffffff81100332>] vfs_create+0x72/0xc0
[110926.972741]                     [<ffffffff81100b8e>]
do_last.isra.69+0x80e/0xc80
[110926.972743]                     [<ffffffff811010ab>]
path_openat.isra.70+0xab/0x490
[110926.972745]                     [<ffffffff8110184d>] do_filp_open+0x3d/0xa0
[110926.972747]                     [<ffffffff810f2139>] do_sys_open+0xf9/0x1e0
[110926.972749]                     [<ffffffff810f223c>] sys_open+0x1c/0x20
[110926.972750]                     [<ffffffff816b2d12>]
system_call_fastpath+0x16/0x1b
[110926.972752]    IN-RECLAIM_FS-R at:
[110926.972753]                        [<ffffffff8107efdf>]
__lock_acquire+0x57f/0x1c00
[110926.972755]                        [<ffffffff81080b55>]
lock_acquire+0x55/0x70
[110926.972757]                        [<ffffffff810f452b>]
__sb_start_write+0xab/0x190
[110926.972758]                        [<ffffffff8122b268>]
xfs_trans_alloc+0x28/0x50
[110926.972760]                        [<ffffffff811f3e54>]
xfs_free_eofblocks+0x104/0x250
[110926.972762]                        [<ffffffff811f4b29>]
xfs_inactive+0xa9/0x480
[110926.972763]                        [<ffffffff811efe00>]
xfs_fs_evict_inode+0x70/0x80
[110926.972765]                        [<ffffffff8110cb8f>] evict+0xaf/0x1b0
[110926.972768]                        [<ffffffff8110d219>]
dispose_list+0x39/0x50
[110926.972770]                        [<ffffffff8110dc73>]
prune_icache_sb+0x183/0x340
[110926.972772]                        [<ffffffff810f5bd3>]
prune_super+0xf3/0x1a0
[110926.972773]                        [<ffffffff810bbe0e>]
shrink_slab+0x11e/0x1f0
[110926.972775]                        [<ffffffff810be400>] kswapd+0x690/0xa10
[110926.972777]                        [<ffffffff8105c246>] kthread+0xd6/0xe0
[110926.972779]                        [<ffffffff816b2c6c>]
ret_from_fork+0x7c/0xb0
[110926.972781]    RECLAIM_FS-ON-R at:
[110926.972782]                        [<ffffffff8108137e>]
mark_held_locks+0x7e/0x130
[110926.972784]                        [<ffffffff81081a63>]
lockdep_trace_alloc+0x63/0xc0
[110926.972785]                        [<ffffffff810e9dd5>]
kmem_cache_alloc+0x35/0xe0
[110926.972787]                        [<ffffffff811f6d3f>]
kmem_zone_alloc+0x5f/0xe0
[110926.972789]                        [<ffffffff811f6dd8>]
kmem_zone_zalloc+0x18/0x50
[110926.972790]                        [<ffffffff8122b1e2>]
_xfs_trans_alloc+0x32/0x90
[110926.972792]                        [<ffffffff8122b278>]
xfs_trans_alloc+0x38/0x50
[110926.972794]                        [<ffffffff811f5147>]
xfs_create+0x187/0x5a0
[110926.972796]                        [<ffffffff811eca1a>]
xfs_vn_mknod+0x8a/0x1b0
[110926.972797]                        [<ffffffff811ecb6e>]
xfs_vn_create+0xe/0x10
[110926.972799]                        [<ffffffff81100332>] vfs_create+0x72/0xc0
[110926.972801]                        [<ffffffff81100b8e>]
do_last.isra.69+0x80e/0xc80
[110926.972803]                        [<ffffffff811010ab>]
path_openat.isra.70+0xab/0x490
[110926.972805]                        [<ffffffff8110184d>]
do_filp_open+0x3d/0xa0
[110926.972807]                        [<ffffffff810f2139>]
do_sys_open+0xf9/0x1e0
[110926.972809]                        [<ffffffff810f223c>] sys_open+0x1c/0x20
[110926.972811]                        [<ffffffff816b2d12>]
system_call_fastpath+0x16/0x1b
[110926.972813]    INITIAL USE at:
[110926.972814]                    [<ffffffff8107ed49>]
__lock_acquire+0x2e9/0x1c00
[110926.972815]                    [<ffffffff81080b55>] lock_acquire+0x55/0x70
[110926.972817]                    [<ffffffff810f452b>]
__sb_start_write+0xab/0x190
[110926.972819]                    [<ffffffff8122b268>]
xfs_trans_alloc+0x28/0x50
[110926.972820]                    [<ffffffff811f5147>] xfs_create+0x187/0x5a0
[110926.972822]                    [<ffffffff811eca1a>] xfs_vn_mknod+0x8a/0x1b0
[110926.972824]                    [<ffffffff811ecb6e>] xfs_vn_create+0xe/0x10
[110926.972826]                    [<ffffffff81100332>] vfs_create+0x72/0xc0
[110926.972827]                    [<ffffffff81100b8e>]
do_last.isra.69+0x80e/0xc80
[110926.972829]                    [<ffffffff811010ab>]
path_openat.isra.70+0xab/0x490
[110926.972831]                    [<ffffffff8110184d>] do_filp_open+0x3d/0xa0
[110926.972833]                    [<ffffffff810f2139>] do_sys_open+0xf9/0x1e0
[110926.972835]                    [<ffffffff810f223c>] sys_open+0x1c/0x20
[110926.972837]                    [<ffffffff816b2d12>]
system_call_fastpath+0x16/0x1b
[110926.972839]  }
[110926.972840]  ... key      at: [<ffffffff81c34e40>] xfs_fs_type+0x60/0x80
[110926.972842]  ... acquired at:
[110926.972843]    [<ffffffff8107df3b>] check_usage_forwards+0x10b/0x140
[110926.972845]    [<ffffffff8107e900>] mark_lock+0x190/0x2f0
[110926.972846]    [<ffffffff8107efdf>] __lock_acquire+0x57f/0x1c00
[110926.972848]    [<ffffffff81080b55>] lock_acquire+0x55/0x70
[110926.972850]    [<ffffffff810f452b>] __sb_start_write+0xab/0x190
[110926.972851]    [<ffffffff8122b268>] xfs_trans_alloc+0x28/0x50
[110926.972853]    [<ffffffff811f3e54>] xfs_free_eofblocks+0x104/0x250
[110926.972855]    [<ffffffff811f4b29>] xfs_inactive+0xa9/0x480
[110926.972856]    [<ffffffff811efe00>] xfs_fs_evict_inode+0x70/0x80
[110926.972858]    [<ffffffff8110cb8f>] evict+0xaf/0x1b0
[110926.972860]    [<ffffffff8110d219>] dispose_list+0x39/0x50
[110926.972861]    [<ffffffff8110dc73>] prune_icache_sb+0x183/0x340
[110926.972863]    [<ffffffff810f5bd3>] prune_super+0xf3/0x1a0
[110926.972865]    [<ffffffff810bbe0e>] shrink_slab+0x11e/0x1f0
[110926.972866]    [<ffffffff810be400>] kswapd+0x690/0xa10
[110926.972868]    [<ffffffff8105c246>] kthread+0xd6/0xe0
[110926.972870]    [<ffffffff816b2c6c>] ret_from_fork+0x7c/0xb0
[110926.972871]
[110926.972872]
[110926.972872] stack backtrace:
[110926.972874] Pid: 725, comm: kswapd0 Not tainted 3.7.0-rc4 #1
[110926.972875] Call Trace:
[110926.972878]  [<ffffffff8107de28>]
print_irq_inversion_bug.part.37+0x1e8/0x1f0
[110926.972880]  [<ffffffff8107df3b>] check_usage_forwards+0x10b/0x140
[110926.972883]  [<ffffffff8107e900>] mark_lock+0x190/0x2f0
[110926.972885]  [<ffffffff8107de30>] ?
print_irq_inversion_bug.part.37+0x1f0/0x1f0
[110926.972887]  [<ffffffff8107efdf>] __lock_acquire+0x57f/0x1c00
[110926.972889]  [<ffffffff81220424>] ? xfs_iext_bno_to_ext+0x84/0x160
[110926.972892]  [<ffffffff8120a0e3>] ? xfs_bmbt_get_all+0x13/0x20
[110926.972895]  [<ffffffff81201104>] ? xfs_bmap_search_multi_extents+0xa4/0x110
[110926.972897]  [<ffffffff81080b55>] lock_acquire+0x55/0x70
[110926.972899]  [<ffffffff8122b268>] ? xfs_trans_alloc+0x28/0x50
[110926.972901]  [<ffffffff810f452b>] __sb_start_write+0xab/0x190
[110926.972903]  [<ffffffff8122b268>] ? xfs_trans_alloc+0x28/0x50
[110926.972905]  [<ffffffff8122b268>] ? xfs_trans_alloc+0x28/0x50
[110926.972907]  [<ffffffff8122b268>] xfs_trans_alloc+0x28/0x50
[110926.972908]  [<ffffffff811f3e54>] xfs_free_eofblocks+0x104/0x250
[110926.972910]  [<ffffffff816b1efb>] ? _raw_spin_unlock_irq+0x2b/0x50
[110926.972912]  [<ffffffff811f4b29>] xfs_inactive+0xa9/0x480
[110926.972914]  [<ffffffff816b1efb>] ? _raw_spin_unlock_irq+0x2b/0x50
[110926.972916]  [<ffffffff811efe00>] xfs_fs_evict_inode+0x70/0x80
[110926.972918]  [<ffffffff8110cb8f>] evict+0xaf/0x1b0
[110926.972920]  [<ffffffff8110d219>] dispose_list+0x39/0x50
[110926.972922]  [<ffffffff8110dc73>] prune_icache_sb+0x183/0x340
[110926.972924]  [<ffffffff810f5bd3>] prune_super+0xf3/0x1a0
[110926.972926]  [<ffffffff810bbe0e>] shrink_slab+0x11e/0x1f0
[110926.972928]  [<ffffffff810be400>] kswapd+0x690/0xa10
[110926.972930]  [<ffffffff8105ca30>] ? __init_waitqueue_head+0x60/0x60
[110926.972932]  [<ffffffff810bdd70>] ? shrink_lruvec+0x540/0x540
[110926.972934]  [<ffffffff8105c246>] kthread+0xd6/0xe0
[110926.972936]  [<ffffffff816b1efb>] ? _raw_spin_unlock_irq+0x2b/0x50
[110926.972938]  [<ffffffff8105c170>] ? flush_kthread_worker+0xe0/0xe0
[110926.972940]  [<ffffffff816b2c6c>] ret_from_fork+0x7c/0xb0
[110926.972942]  [<ffffffff8105c170>] ? flush_kthread_worker+0xe0/0xe0

> Nov 17 14:07:38 thoregon kernel: [66571.610863]
> Nov 17 14:07:38 thoregon kernel: [66571.610869]
> =========================================================
> Nov 17 14:07:38 thoregon kernel: [66571.610870] [ INFO: possible irq
> lock inversion dependency detected ]
> Nov 17 14:07:38 thoregon kernel: [66571.610873] 3.7.0-rc5 #1 Not tainted
> Nov 17 14:07:38 thoregon kernel: [66571.610874]
> ---------------------------------------------------------
> Nov 17 14:07:38 thoregon kernel: [66571.610875] cc1/21330 just changed
> the state of lock:
> Nov 17 14:07:38 thoregon kernel: [66571.610877]
> (sb_internal){.+.+.?}, at: [<ffffffff8122b138>]
> xfs_trans_alloc+0x28/0x50
> Nov 17 14:07:38 thoregon kernel: [66571.610885] but this lock took
> another, RECLAIM_FS-unsafe lock in the past:
> Nov 17 14:07:38 thoregon kernel: [66571.610886]
> (&(&ip->i_lock)->mr_lock/1){+.+.+.}
> Nov 17 14:07:38 thoregon kernel: [66571.610886]
> Nov 17 14:07:39 thoregon kernel: [66571.610886] and interrupts could
> create inverse lock ordering between them.
> Nov 17 14:07:39 thoregon kernel: [66571.610886]
> Nov 17 14:07:39 thoregon kernel: [66571.610890]
> Nov 17 14:07:39 thoregon kernel: [66571.610890] other info that might
> help us debug this:
> Nov 17 14:07:39 thoregon kernel: [66571.610891]  Possible interrupt
> unsafe locking scenario:
> Nov 17 14:07:39 thoregon kernel: [66571.610891]
> Nov 17 14:07:39 thoregon kernel: [66571.610892]        CPU0                    CPU1
> Nov 17 14:07:39 thoregon kernel: [66571.610893]        ----                    ----
> Nov 17 14:07:39 thoregon kernel: [66571.610894]   lock(&(&ip->i_lock)->mr_lock/1);
> Nov 17 14:07:39 thoregon kernel: [66571.610896]                                local_irq_disable();
> Nov 17 14:07:39 thoregon kernel: [66571.610897]                                lock(sb_internal);
> Nov 17 14:07:39 thoregon kernel: [66571.610898]                                lock(&(&ip->i_lock)->mr_lock/1);
> Nov 17 14:07:39 thoregon kernel: [66571.610900]   <Interrupt>
> Nov 17 14:07:39 thoregon kernel: [66571.610901]     lock(sb_internal);
> Nov 17 14:07:39 thoregon kernel: [66571.610902]
> Nov 17 14:07:39 thoregon kernel: [66571.610902]  *** DEADLOCK ***
> Nov 17 14:07:39 thoregon kernel: [66571.610902]
> Nov 17 14:07:39 thoregon kernel: [66571.610904] 3 locks held by cc1/21330:
> Nov 17 14:07:39 thoregon kernel: [66571.610905]  #0:
> (&mm->mmap_sem){++++++}, at: [<ffffffff81029d8b>]
> __do_page_fault+0xfb/0x480
> Nov 17 14:07:39 thoregon kernel: [66571.610910]  #1:
> (shrinker_rwsem){++++..}, at: [<ffffffff810bbd02>]
> shrink_slab+0x32/0x1f0
> Nov 17 14:07:39 thoregon kernel: [66571.610915]  #2:
> (&type->s_umount_key#20){++++.+}, at: [<ffffffff810f5a7e>]
> grab_super_passive+0x3e/0x90
> Nov 17 14:07:39 thoregon kernel: [66571.610921]
> Nov 17 14:07:39 thoregon kernel: [66571.610921] the shortest
> dependencies between 2nd lock and 1st lock:
> Nov 17 14:07:39 thoregon kernel: [66571.610927]  ->
> (&(&ip->i_lock)->mr_lock/1){+.+.+.} ops: 169649 {
> Nov 17 14:07:39 thoregon kernel: [66571.610931]     HARDIRQ-ON-W at:
> Nov 17 14:07:39 thoregon kernel: [66571.610932]
> [<ffffffff8107f091>] __lock_acquire+0x631/0x1c00
> Nov 17 14:07:39 thoregon kernel: [66571.610935]
> [<ffffffff81080b55>] lock_acquire+0x55/0x70
> Nov 17 14:07:39 thoregon kernel: [66571.610937]
> [<ffffffff8106126a>] down_write_nested+0x4a/0x70
> Nov 17 14:07:39 thoregon kernel: [66571.610941]
> [<ffffffff811e8164>] xfs_ilock+0x84/0xb0
> Nov 17 14:07:39 thoregon kernel: [66571.610944]
> [<ffffffff811f51a4>] xfs_create+0x1d4/0x5a0
> Nov 17 14:07:39 thoregon kernel: [66571.610946]
> [<ffffffff811eca2a>] xfs_vn_mknod+0x8a/0x1b0
> Nov 17 14:07:39 thoregon kernel: [66571.610948]
> [<ffffffff811ecb7e>] xfs_vn_create+0xe/0x10
> Nov 17 14:07:39 thoregon kernel: [66571.610950]
> [<ffffffff81100322>] vfs_create+0x72/0xc0
> Nov 17 14:07:39 thoregon kernel: [66571.610953]
> [<ffffffff81100b7e>] do_last.isra.69+0x80e/0xc80
> Nov 17 14:07:39 thoregon kernel: [66571.610955]
> [<ffffffff8110109b>] path_openat.isra.70+0xab/0x490
> Nov 17 14:07:39 thoregon kernel: [66571.610957]
> [<ffffffff8110183d>] do_filp_open+0x3d/0xa0
> Nov 17 14:07:39 thoregon kernel: [66571.610959]
> [<ffffffff810f2129>] do_sys_open+0xf9/0x1e0
> Nov 17 14:07:39 thoregon kernel: [66571.610962]
> [<ffffffff810f222c>] sys_open+0x1c/0x20
> Nov 17 14:07:39 thoregon kernel: [66571.610964]
> [<ffffffff816b2e52>] system_call_fastpath+0x16/0x1b
> Nov 17 14:07:39 thoregon kernel: [66571.610967]     SOFTIRQ-ON-W at:
> Nov 17 14:07:39 thoregon kernel: [66571.610968]
> [<ffffffff8107f0c7>] __lock_acquire+0x667/0x1c00
> Nov 17 14:07:39 thoregon kernel: [66571.610970]
> [<ffffffff81080b55>] lock_acquire+0x55/0x70
> Nov 17 14:07:39 thoregon kernel: [66571.610972]
> [<ffffffff8106126a>] down_write_nested+0x4a/0x70
> Nov 17 14:07:39 thoregon kernel: [66571.610974]
> [<ffffffff811e8164>] xfs_ilock+0x84/0xb0
> Nov 17 14:07:39 thoregon kernel: [66571.610976]
> [<ffffffff811f51a4>] xfs_create+0x1d4/0x5a0
> Nov 17 14:07:39 thoregon kernel: [66571.610977]
> [<ffffffff811eca2a>] xfs_vn_mknod+0x8a/0x1b0
> Nov 17 14:07:39 thoregon kernel: [66571.610979]
> [<ffffffff811ecb7e>] xfs_vn_create+0xe/0x10
> Nov 17 14:07:39 thoregon kernel: [66571.610981]
> [<ffffffff81100322>] vfs_create+0x72/0xc0
> Nov 17 14:07:39 thoregon kernel: [66571.610983]
> [<ffffffff81100b7e>] do_last.isra.69+0x80e/0xc80
> Nov 17 14:07:39 thoregon kernel: [66571.610985]
> [<ffffffff8110109b>] path_openat.isra.70+0xab/0x490
> Nov 17 14:07:39 thoregon kernel: [66571.610987]
> [<ffffffff8110183d>] do_filp_open+0x3d/0xa0
> Nov 17 14:07:39 thoregon kernel: [66571.610989]
> [<ffffffff810f2129>] do_sys_open+0xf9/0x1e0
> Nov 17 14:07:39 thoregon kernel: [66571.610991]
> [<ffffffff810f222c>] sys_open+0x1c/0x20
> Nov 17 14:07:39 thoregon kernel: [66571.610993]
> [<ffffffff816b2e52>] system_call_fastpath+0x16/0x1b
> Nov 17 14:07:39 thoregon kernel: [66571.610995]     RECLAIM_FS-ON-W at:
> Nov 17 14:07:39 thoregon kernel: [66571.610996]
>   [<ffffffff8108137e>] mark_held_locks+0x7e/0x130
> Nov 17 14:07:39 thoregon kernel: [66571.610998]
>   [<ffffffff81081a63>] lockdep_trace_alloc+0x63/0xc0
> Nov 17 14:07:39 thoregon kernel: [66571.610999]
>   [<ffffffff810e9dc5>] kmem_cache_alloc+0x35/0xe0
> Nov 17 14:07:39 thoregon kernel: [66571.611002]
>   [<ffffffff810dba21>] vm_map_ram+0x271/0x770
> Nov 17 14:07:39 thoregon kernel: [66571.611004]
>   [<ffffffff811e12b6>] _xfs_buf_map_pages+0x46/0xe0
> Nov 17 14:07:39 thoregon kernel: [66571.611008]
>   [<ffffffff811e21ca>] xfs_buf_get_map+0x8a/0x130
> Nov 17 14:07:39 thoregon kernel: [66571.611009]
>   [<ffffffff81233989>] xfs_trans_get_buf_map+0xa9/0xd0
> Nov 17 14:07:39 thoregon kernel: [66571.611011]
>   [<ffffffff8121bc2d>] xfs_ialloc_inode_init+0xcd/0x1d0
> Nov 17 14:07:39 thoregon kernel: [66571.611015]
>   [<ffffffff8121c16f>] xfs_ialloc_ag_alloc+0x18f/0x500
> Nov 17 14:07:39 thoregon kernel: [66571.611017]
>   [<ffffffff8121d955>] xfs_dialloc+0x185/0x2a0
> Nov 17 14:07:39 thoregon kernel: [66571.611019]
>   [<ffffffff8121f068>] xfs_ialloc+0x58/0x650
> Nov 17 14:07:39 thoregon kernel: [66571.611021]
>   [<ffffffff811f3995>] xfs_dir_ialloc+0x65/0x270
> Nov 17 14:07:39 thoregon kernel: [66571.611023]
>   [<ffffffff811f537c>] xfs_create+0x3ac/0x5a0
> Nov 17 14:07:39 thoregon kernel: [66571.611024]
>   [<ffffffff811eca2a>] xfs_vn_mknod+0x8a/0x1b0
> Nov 17 14:07:39 thoregon kernel: [66571.611026]
>   [<ffffffff811ecb61>] xfs_vn_mkdir+0x11/0x20
> Nov 17 14:07:39 thoregon kernel: [66571.611028]
>   [<ffffffff8110016f>] vfs_mkdir+0x7f/0xd0
> Nov 17 14:07:39 thoregon kernel: [66571.611030]
>   [<ffffffff81101b83>] sys_mkdirat+0x43/0x80
> Nov 17 14:07:39 thoregon kernel: [66571.611032]
>   [<ffffffff81101bd4>] sys_mkdir+0x14/0x20
> Nov 17 14:07:39 thoregon kernel: [66571.611034]
>   [<ffffffff816b2e52>] system_call_fastpath+0x16/0x1b
> Nov 17 14:07:39 thoregon kernel: [66571.611036]     INITIAL USE at:
> Nov 17 14:07:39 thoregon kernel: [66571.611037]
> [<ffffffff8107ed49>] __lock_acquire+0x2e9/0x1c00
> Nov 17 14:07:39 thoregon kernel: [66571.611038]
> [<ffffffff81080b55>] lock_acquire+0x55/0x70
> Nov 17 14:07:39 thoregon kernel: [66571.611040]
> [<ffffffff8106126a>] down_write_nested+0x4a/0x70
> Nov 17 14:07:39 thoregon kernel: [66571.611042]
> [<ffffffff811e8164>] xfs_ilock+0x84/0xb0
> Nov 17 14:07:39 thoregon kernel: [66571.611044]
> [<ffffffff811f51a4>] xfs_create+0x1d4/0x5a0
> Nov 17 14:07:39 thoregon kernel: [66571.611046]
> [<ffffffff811eca2a>] xfs_vn_mknod+0x8a/0x1b0
> Nov 17 14:07:39 thoregon kernel: [66571.611047]
> [<ffffffff811ecb7e>] xfs_vn_create+0xe/0x10
> Nov 17 14:07:39 thoregon kernel: [66571.611049]
> [<ffffffff81100322>] vfs_create+0x72/0xc0
> Nov 17 14:07:39 thoregon kernel: [66571.611051]
> [<ffffffff81100b7e>] do_last.isra.69+0x80e/0xc80
> Nov 17 14:07:39 thoregon kernel: [66571.611053]
> [<ffffffff8110109b>] path_openat.isra.70+0xab/0x490
> Nov 17 14:07:39 thoregon kernel: [66571.611055]
> [<ffffffff8110183d>] do_filp_open+0x3d/0xa0
> Nov 17 14:07:39 thoregon kernel: [66571.611057]
> [<ffffffff810f2129>] do_sys_open+0xf9/0x1e0
> Nov 17 14:07:39 thoregon kernel: [66571.611059]
> [<ffffffff810f222c>] sys_open+0x1c/0x20
> Nov 17 14:07:39 thoregon kernel: [66571.611061]
> [<ffffffff816b2e52>] system_call_fastpath+0x16/0x1b
> Nov 17 14:07:39 thoregon kernel: [66571.611063]   }
> Nov 17 14:07:39 thoregon kernel: [66571.611064]   ... key      at:
> [<ffffffff825b4b81>] __key.41357+0x1/0x8
> Nov 17 14:07:39 thoregon kernel: [66571.611066]   ... acquired at:
> Nov 17 14:07:39 thoregon kernel: [66571.611067]
> [<ffffffff81080b55>] lock_acquire+0x55/0x70
> Nov 17 14:07:39 thoregon kernel: [66571.611069]
> [<ffffffff8106126a>] down_write_nested+0x4a/0x70
> Nov 17 14:07:39 thoregon kernel: [66571.611071]
> [<ffffffff811e8164>] xfs_ilock+0x84/0xb0
> Nov 17 14:07:39 thoregon kernel: [66571.611073]
> [<ffffffff811f51a4>] xfs_create+0x1d4/0x5a0
> Nov 17 14:07:39 thoregon kernel: [66571.611074]
> [<ffffffff811eca2a>] xfs_vn_mknod+0x8a/0x1b0
> Nov 17 14:07:39 thoregon kernel: [66571.611076]
> [<ffffffff811ecb7e>] xfs_vn_create+0xe/0x10
> Nov 17 14:07:39 thoregon kernel: [66571.611078]
> [<ffffffff81100322>] vfs_create+0x72/0xc0
> Nov 17 14:07:39 thoregon kernel: [66571.611080]
> [<ffffffff81100b7e>] do_last.isra.69+0x80e/0xc80
> Nov 17 14:07:39 thoregon kernel: [66571.611082]
> [<ffffffff8110109b>] path_openat.isra.70+0xab/0x490
> Nov 17 14:07:39 thoregon kernel: [66571.611084]
> [<ffffffff8110183d>] do_filp_open+0x3d/0xa0
> Nov 17 14:07:39 thoregon kernel: [66571.611086]
> [<ffffffff810f2129>] do_sys_open+0xf9/0x1e0
> Nov 17 14:07:39 thoregon kernel: [66571.611088]
> [<ffffffff810f222c>] sys_open+0x1c/0x20
> Nov 17 14:07:39 thoregon kernel: [66571.611090]
> [<ffffffff816b2e52>] system_call_fastpath+0x16/0x1b
> Nov 17 14:07:39 thoregon kernel: [66571.611091]
> Nov 17 14:07:39 thoregon kernel: [66571.611092] ->
> (sb_internal){.+.+.?} ops: 1341531 {
> Nov 17 14:07:39 thoregon kernel: [66571.611095]    HARDIRQ-ON-R at:
> Nov 17 14:07:39 thoregon kernel: [66571.611096]
> [<ffffffff8107ef6a>] __lock_acquire+0x50a/0x1c00
> Nov 17 14:07:39 thoregon kernel: [66571.611098]
> [<ffffffff81080b55>] lock_acquire+0x55/0x70
> Nov 17 14:07:39 thoregon kernel: [66571.611100]
> [<ffffffff810f451b>] __sb_start_write+0xab/0x190
> Nov 17 14:07:39 thoregon kernel: [66571.611102]
> [<ffffffff8122b138>] xfs_trans_alloc+0x28/0x50
> Nov 17 14:07:39 thoregon kernel: [66571.611104]
> [<ffffffff811f5157>] xfs_create+0x187/0x5a0
> Nov 17 14:07:39 thoregon kernel: [66571.611105]
> [<ffffffff811eca2a>] xfs_vn_mknod+0x8a/0x1b0
> Nov 17 14:07:39 thoregon kernel: [66571.611107]
> [<ffffffff811ecb7e>] xfs_vn_create+0xe/0x10
> Nov 17 14:07:39 thoregon kernel: [66571.611109]
> [<ffffffff81100322>] vfs_create+0x72/0xc0
> Nov 17 14:07:39 thoregon kernel: [66571.611111]
> [<ffffffff81100b7e>] do_last.isra.69+0x80e/0xc80
> Nov 17 14:07:39 thoregon kernel: [66571.611113]
> [<ffffffff8110109b>] path_openat.isra.70+0xab/0x490
> Nov 17 14:07:39 thoregon kernel: [66571.611115]
> [<ffffffff8110183d>] do_filp_open+0x3d/0xa0
> Nov 17 14:07:39 thoregon kernel: [66571.611117]
> [<ffffffff810f2129>] do_sys_open+0xf9/0x1e0
> Nov 17 14:07:39 thoregon kernel: [66571.611119]
> [<ffffffff810f222c>] sys_open+0x1c/0x20
> Nov 17 14:07:39 thoregon kernel: [66571.611121]
> [<ffffffff816b2e52>] system_call_fastpath+0x16/0x1b
> Nov 17 14:07:39 thoregon kernel: [66571.611123]    SOFTIRQ-ON-R at:
> Nov 17 14:07:39 thoregon kernel: [66571.611124]
> [<ffffffff8107f0c7>] __lock_acquire+0x667/0x1c00
> Nov 17 14:07:39 thoregon kernel: [66571.611126]
> [<ffffffff81080b55>] lock_acquire+0x55/0x70
> Nov 17 14:07:39 thoregon kernel: [66571.611128]
> [<ffffffff810f451b>] __sb_start_write+0xab/0x190
> Nov 17 14:07:39 thoregon kernel: [66571.611130]
> [<ffffffff8122b138>] xfs_trans_alloc+0x28/0x50
> Nov 17 14:07:39 thoregon kernel: [66571.611132]
> [<ffffffff811f5157>] xfs_create+0x187/0x5a0
> Nov 17 14:07:39 thoregon kernel: [66571.611133]
> [<ffffffff811eca2a>] xfs_vn_mknod+0x8a/0x1b0
> Nov 17 14:07:39 thoregon kernel: [66571.611135]
> [<ffffffff811ecb7e>] xfs_vn_create+0xe/0x10
> Nov 17 14:07:39 thoregon kernel: [66571.611137]
> [<ffffffff81100322>] vfs_create+0x72/0xc0
> Nov 17 14:07:39 thoregon kernel: [66571.611139]
> [<ffffffff81100b7e>] do_last.isra.69+0x80e/0xc80
> Nov 17 14:07:39 thoregon kernel: [66571.611141]
> [<ffffffff8110109b>] path_openat.isra.70+0xab/0x490
> Nov 17 14:07:39 thoregon kernel: [66571.611143]
> [<ffffffff8110183d>] do_filp_open+0x3d/0xa0
> Nov 17 14:07:39 thoregon kernel: [66571.611145]
> [<ffffffff810f2129>] do_sys_open+0xf9/0x1e0
> Nov 17 14:07:39 thoregon kernel: [66571.611147]
> [<ffffffff810f222c>] sys_open+0x1c/0x20
> Nov 17 14:07:39 thoregon kernel: [66571.611149]
> [<ffffffff816b2e52>] system_call_fastpath+0x16/0x1b
> Nov 17 14:07:39 thoregon kernel: [66571.611151]    IN-RECLAIM_FS-R at:
> Nov 17 14:07:39 thoregon kernel: [66571.611152]
> [<ffffffff8107efdf>] __lock_acquire+0x57f/0x1c00
> Nov 17 14:07:39 thoregon kernel: [66571.611154]
> [<ffffffff81080b55>] lock_acquire+0x55/0x70
> Nov 17 14:07:39 thoregon kernel: [66571.611156]
> [<ffffffff810f451b>] __sb_start_write+0xab/0x190
> Nov 17 14:07:39 thoregon kernel: [66571.611158]
> [<ffffffff8122b138>] xfs_trans_alloc+0x28/0x50
> Nov 17 14:07:39 thoregon kernel: [66571.611159]
> [<ffffffff811f3e64>] xfs_free_eofblocks+0x104/0x250
> Nov 17 14:07:39 thoregon kernel: [66571.611161]
> [<ffffffff811f4b39>] xfs_inactive+0xa9/0x480
> Nov 17 14:07:39 thoregon kernel: [66571.611163]
> [<ffffffff811efe10>] xfs_fs_evict_inode+0x70/0x80
> Nov 17 14:07:39 thoregon kernel: [66571.611165]
> [<ffffffff8110cb7f>] evict+0xaf/0x1b0
> Nov 17 14:07:39 thoregon kernel: [66571.611168]
> [<ffffffff8110d209>] dispose_list+0x39/0x50
> Nov 17 14:07:39 thoregon kernel: [66571.611170]
> [<ffffffff8110dc63>] prune_icache_sb+0x183/0x340
> Nov 17 14:07:39 thoregon kernel: [66571.611172]
> [<ffffffff810f5bc3>] prune_super+0xf3/0x1a0
> Nov 17 14:07:39 thoregon kernel: [66571.611173]
> [<ffffffff810bbdee>] shrink_slab+0x11e/0x1f0
> Nov 17 14:07:39 thoregon kernel: [66571.611175]
> [<ffffffff810be98f>] try_to_free_pages+0x21f/0x4e0
> Nov 17 14:07:39 thoregon kernel: [66571.611177]
> [<ffffffff810b5ec6>] __alloc_pages_nodemask+0x506/0x800
> Nov 17 14:07:39 thoregon kernel: [66571.611179]
> [<ffffffff810ce56e>] handle_pte_fault+0x5ae/0x7a0
> Nov 17 14:07:39 thoregon kernel: [66571.611182]
> [<ffffffff810cf769>] handle_mm_fault+0x1f9/0x2a0
> Nov 17 14:07:39 thoregon kernel: [66571.611184]
> [<ffffffff81029dfc>] __do_page_fault+0x16c/0x480
> Nov 17 14:07:39 thoregon kernel: [66571.611186]
> [<ffffffff8102a139>] do_page_fault+0x9/0x10
> Nov 17 14:07:39 thoregon kernel: [66571.611188]
> [<ffffffff816b287f>] page_fault+0x1f/0x30
> Nov 17 14:07:39 thoregon kernel: [66571.611190]    RECLAIM_FS-ON-R at:
> Nov 17 14:07:39 thoregon kernel: [66571.611191]
> [<ffffffff8108137e>] mark_held_locks+0x7e/0x130
> Nov 17 14:07:39 thoregon kernel: [66571.611193]
> [<ffffffff81081a63>] lockdep_trace_alloc+0x63/0xc0
> Nov 17 14:07:39 thoregon kernel: [66571.611195]
> [<ffffffff810e9dc5>] kmem_cache_alloc+0x35/0xe0
> Nov 17 14:07:39 thoregon kernel: [66571.611197]
> [<ffffffff811f6d4f>] kmem_zone_alloc+0x5f/0xe0
> Nov 17 14:07:39 thoregon kernel: [66571.611198]
> [<ffffffff811f6de8>] kmem_zone_zalloc+0x18/0x50
> Nov 17 14:07:39 thoregon kernel: [66571.611200]
> [<ffffffff8122b0b2>] _xfs_trans_alloc+0x32/0x90
> Nov 17 14:07:39 thoregon kernel: [66571.611202]
> [<ffffffff8122b148>] xfs_trans_alloc+0x38/0x50
> Nov 17 14:07:39 thoregon kernel: [66571.611204]
> [<ffffffff811f5157>] xfs_create+0x187/0x5a0
> Nov 17 14:07:39 thoregon kernel: [66571.611205]
> [<ffffffff811eca2a>] xfs_vn_mknod+0x8a/0x1b0
> Nov 17 14:07:39 thoregon kernel: [66571.611207]
> [<ffffffff811ecb7e>] xfs_vn_create+0xe/0x10
> Nov 17 14:07:39 thoregon kernel: [66571.611209]
> [<ffffffff81100322>] vfs_create+0x72/0xc0
> Nov 17 14:07:39 thoregon kernel: [66571.611211]
> [<ffffffff81100b7e>] do_last.isra.69+0x80e/0xc80
> Nov 17 14:07:39 thoregon kernel: [66571.611213]
> [<ffffffff8110109b>] path_openat.isra.70+0xab/0x490
> Nov 17 14:07:39 thoregon kernel: [66571.611215]
> [<ffffffff8110183d>] do_filp_open+0x3d/0xa0
> Nov 17 14:07:39 thoregon kernel: [66571.611217]
> [<ffffffff810f2129>] do_sys_open+0xf9/0x1e0
> Nov 17 14:07:39 thoregon kernel: [66571.611219]
> [<ffffffff810f222c>] sys_open+0x1c/0x20
> Nov 17 14:07:39 thoregon kernel: [66571.611221]
> [<ffffffff816b2e52>] system_call_fastpath+0x16/0x1b
> Nov 17 14:07:39 thoregon kernel: [66571.611223]    INITIAL USE at:
> Nov 17 14:07:39 thoregon kernel: [66571.611224]
> [<ffffffff8107ed49>] __lock_acquire+0x2e9/0x1c00
> Nov 17 14:07:39 thoregon kernel: [66571.611225]
> [<ffffffff81080b55>] lock_acquire+0x55/0x70
> Nov 17 14:07:39 thoregon kernel: [66571.611227]
> [<ffffffff810f451b>] __sb_start_write+0xab/0x190
> Nov 17 14:07:39 thoregon kernel: [66571.611229]
> [<ffffffff8122b138>] xfs_trans_alloc+0x28/0x50
> Nov 17 14:07:39 thoregon kernel: [66571.611231]
> [<ffffffff811f5157>] xfs_create+0x187/0x5a0
> Nov 17 14:07:39 thoregon kernel: [66571.611232]
> [<ffffffff811eca2a>] xfs_vn_mknod+0x8a/0x1b0
> Nov 17 14:07:39 thoregon kernel: [66571.611234]
> [<ffffffff811ecb7e>] xfs_vn_create+0xe/0x10
> Nov 17 14:07:39 thoregon kernel: [66571.611236]
> [<ffffffff81100322>] vfs_create+0x72/0xc0
> Nov 17 14:07:39 thoregon kernel: [66571.611238]
> [<ffffffff81100b7e>] do_last.isra.69+0x80e/0xc80
> Nov 17 14:07:39 thoregon kernel: [66571.611240]
> [<ffffffff8110109b>] path_openat.isra.70+0xab/0x490
> Nov 17 14:07:39 thoregon kernel: [66571.611242]
> [<ffffffff8110183d>] do_filp_open+0x3d/0xa0
> Nov 17 14:07:39 thoregon kernel: [66571.611244]
> [<ffffffff810f2129>] do_sys_open+0xf9/0x1e0
> Nov 17 14:07:39 thoregon kernel: [66571.611246]
> [<ffffffff810f222c>] sys_open+0x1c/0x20
> Nov 17 14:07:39 thoregon kernel: [66571.611248]
> [<ffffffff816b2e52>] system_call_fastpath+0x16/0x1b
> Nov 17 14:07:39 thoregon kernel: [66571.611250]  }
> Nov 17 14:07:39 thoregon kernel: [66571.611251]  ... key      at:
> [<ffffffff81c34e40>] xfs_fs_type+0x60/0x80
> Nov 17 14:07:39 thoregon kernel: [66571.611254]  ... acquired at:
> Nov 17 14:07:39 thoregon kernel: [66571.611254]
> [<ffffffff8107df3b>] check_usage_forwards+0x10b/0x140
> Nov 17 14:07:39 thoregon kernel: [66571.611256]
> [<ffffffff8107e900>] mark_lock+0x190/0x2f0
> Nov 17 14:07:39 thoregon kernel: [66571.611258]
> [<ffffffff8107efdf>] __lock_acquire+0x57f/0x1c00
> Nov 17 14:07:39 thoregon kernel: [66571.611260]
> [<ffffffff81080b55>] lock_acquire+0x55/0x70
> Nov 17 14:07:39 thoregon kernel: [66571.611261]
> [<ffffffff810f451b>] __sb_start_write+0xab/0x190
> Nov 17 14:07:39 thoregon kernel: [66571.611263]
> [<ffffffff8122b138>] xfs_trans_alloc+0x28/0x50
> Nov 17 14:07:39 thoregon kernel: [66571.611265]
> [<ffffffff811f3e64>] xfs_free_eofblocks+0x104/0x250
> Nov 17 14:07:39 thoregon kernel: [66571.611266]
> [<ffffffff811f4b39>] xfs_inactive+0xa9/0x480
> Nov 17 14:07:39 thoregon kernel: [66571.611268]
> [<ffffffff811efe10>] xfs_fs_evict_inode+0x70/0x80
> Nov 17 14:07:39 thoregon kernel: [66571.611270]
> [<ffffffff8110cb7f>] evict+0xaf/0x1b0
> Nov 17 14:07:39 thoregon kernel: [66571.611271]
> [<ffffffff8110d209>] dispose_list+0x39/0x50
> Nov 17 14:07:39 thoregon kernel: [66571.611273]
> [<ffffffff8110dc63>] prune_icache_sb+0x183/0x340
> Nov 17 14:07:39 thoregon kernel: [66571.611275]
> [<ffffffff810f5bc3>] prune_super+0xf3/0x1a0
> Nov 17 14:07:39 thoregon kernel: [66571.611277]
> [<ffffffff810bbdee>] shrink_slab+0x11e/0x1f0
> Nov 17 14:07:39 thoregon kernel: [66571.611278]
> [<ffffffff810be98f>] try_to_free_pages+0x21f/0x4e0
> Nov 17 14:07:39 thoregon kernel: [66571.611280]
> [<ffffffff810b5ec6>] __alloc_pages_nodemask+0x506/0x800
> Nov 17 14:07:39 thoregon kernel: [66571.611282]
> [<ffffffff810ce56e>] handle_pte_fault+0x5ae/0x7a0
> Nov 17 14:07:39 thoregon kernel: [66571.611284]
> [<ffffffff810cf769>] handle_mm_fault+0x1f9/0x2a0
> Nov 17 14:07:39 thoregon kernel: [66571.611286]
> [<ffffffff81029dfc>] __do_page_fault+0x16c/0x480
> Nov 17 14:07:39 thoregon kernel: [66571.611287]
> [<ffffffff8102a139>] do_page_fault+0x9/0x10
> Nov 17 14:07:39 thoregon kernel: [66571.611289]
> [<ffffffff816b287f>] page_fault+0x1f/0x30
> Nov 17 14:07:39 thoregon kernel: [66571.611291]
> Nov 17 14:07:39 thoregon kernel: [66571.611292]
> Nov 17 14:07:39 thoregon kernel: [66571.611292] stack backtrace:
> Nov 17 14:07:39 thoregon kernel: [66571.611294] Pid: 21330, comm: cc1
> Not tainted 3.7.0-rc5 #1
> Nov 17 14:07:39 thoregon kernel: [66571.611295] Call Trace:
> Nov 17 14:07:39 thoregon kernel: [66571.611298]  [<ffffffff8107de28>]
> print_irq_inversion_bug.part.37+0x1e8/0x1f0
> Nov 17 14:07:39 thoregon kernel: [66571.611300]  [<ffffffff8107df3b>]
> check_usage_forwards+0x10b/0x140
> Nov 17 14:07:39 thoregon kernel: [66571.611303]  [<ffffffff8107e900>]
> mark_lock+0x190/0x2f0
> Nov 17 14:07:39 thoregon kernel: [66571.611306]  [<ffffffff8150406e>]
> ? dm_request+0x2e/0x2a0
> Nov 17 14:07:39 thoregon kernel: [66571.611308]  [<ffffffff8107de30>]
> ? print_irq_inversion_bug.part.37+0x1f0/0x1f0
> Nov 17 14:07:39 thoregon kernel: [66571.611310]  [<ffffffff8107efdf>]
> __lock_acquire+0x57f/0x1c00
> Nov 17 14:07:39 thoregon kernel: [66571.611313]  [<ffffffff812202f4>]
> ? xfs_iext_bno_to_ext+0x84/0x160
> Nov 17 14:07:39 thoregon kernel: [66571.611316]  [<ffffffff8120a023>]
> ? xfs_bmbt_get_all+0x13/0x20
> Nov 17 14:07:39 thoregon kernel: [66571.611318]  [<ffffffff81200fb4>]
> ? xfs_bmap_search_multi_extents+0xa4/0x110
> Nov 17 14:07:39 thoregon kernel: [66571.611320]  [<ffffffff81080b55>]
> lock_acquire+0x55/0x70
> Nov 17 14:07:39 thoregon kernel: [66571.611322]  [<ffffffff8122b138>]
> ? xfs_trans_alloc+0x28/0x50
> Nov 17 14:07:39 thoregon kernel: [66571.611324]  [<ffffffff810f451b>]
> __sb_start_write+0xab/0x190
> Nov 17 14:07:39 thoregon kernel: [66571.611326]  [<ffffffff8122b138>]
> ? xfs_trans_alloc+0x28/0x50
> Nov 17 14:07:39 thoregon kernel: [66571.611328]  [<ffffffff8122b138>]
> ? xfs_trans_alloc+0x28/0x50
> Nov 17 14:07:39 thoregon kernel: [66571.611330]  [<ffffffff8122b138>]
> xfs_trans_alloc+0x28/0x50
> Nov 17 14:07:39 thoregon kernel: [66571.611332]  [<ffffffff811f3e64>]
> xfs_free_eofblocks+0x104/0x250
> Nov 17 14:07:39 thoregon kernel: [66571.611334]  [<ffffffff816b204b>]
> ? _raw_spin_unlock_irq+0x2b/0x50
> Nov 17 14:07:39 thoregon kernel: [66571.611336]  [<ffffffff811f4b39>]
> xfs_inactive+0xa9/0x480
> Nov 17 14:07:39 thoregon kernel: [66571.611337]  [<ffffffff816b204b>]
> ? _raw_spin_unlock_irq+0x2b/0x50
> Nov 17 14:07:39 thoregon kernel: [66571.611340]  [<ffffffff811efe10>]
> xfs_fs_evict_inode+0x70/0x80
> Nov 17 14:07:39 thoregon kernel: [66571.611342]  [<ffffffff8110cb7f>]
> evict+0xaf/0x1b0
> Nov 17 14:07:39 thoregon kernel: [66571.611344]  [<ffffffff8110d209>]
> dispose_list+0x39/0x50
> Nov 17 14:07:39 thoregon kernel: [66571.611346]  [<ffffffff8110dc63>]
> prune_icache_sb+0x183/0x340
> Nov 17 14:07:39 thoregon kernel: [66571.611347]  [<ffffffff810f5bc3>]
> prune_super+0xf3/0x1a0
> Nov 17 14:07:39 thoregon kernel: [66571.611349]  [<ffffffff810bbdee>]
> shrink_slab+0x11e/0x1f0
> Nov 17 14:07:39 thoregon kernel: [66571.611352]  [<ffffffff810be98f>]
> try_to_free_pages+0x21f/0x4e0
> Nov 17 14:07:39 thoregon kernel: [66571.611354]  [<ffffffff810b5ec6>]
> __alloc_pages_nodemask+0x506/0x800
> Nov 17 14:07:39 thoregon kernel: [66571.611356]  [<ffffffff810b9e40>]
> ? lru_deactivate_fn+0x1c0/0x1c0
> Nov 17 14:07:39 thoregon kernel: [66571.611358]  [<ffffffff810ce56e>]
> handle_pte_fault+0x5ae/0x7a0
> Nov 17 14:07:39 thoregon kernel: [66571.611360]  [<ffffffff810cf769>]
> handle_mm_fault+0x1f9/0x2a0
> Nov 17 14:07:39 thoregon kernel: [66571.611363]  [<ffffffff81029dfc>]
> __do_page_fault+0x16c/0x480
> Nov 17 14:07:39 thoregon kernel: [66571.611366]  [<ffffffff8129c7ad>]
> ? trace_hardirqs_off_thunk+0x3a/0x3c
> Nov 17 14:07:39 thoregon kernel: [66571.611368]  [<ffffffff8102a139>]
> do_page_fault+0x9/0x10
> Nov 17 14:07:39 thoregon kernel: [66571.611370]  [<ffffffff816b287f>]
> page_fault+0x1f/0x30

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: Hang in XFS reclaim on 3.7.0-rc3
  2012-11-18 15:29         ` Torsten Kaiser
@ 2012-11-18 23:51           ` Dave Chinner
  -1 siblings, 0 replies; 31+ messages in thread
From: Dave Chinner @ 2012-11-18 23:51 UTC (permalink / raw)
  To: Torsten Kaiser; +Cc: xfs, Linux Kernel

On Sun, Nov 18, 2012 at 04:29:22PM +0100, Torsten Kaiser wrote:
> On Sun, Nov 18, 2012 at 11:24 AM, Torsten Kaiser
> <just.for.lkml@googlemail.com> wrote:
> > On Tue, Oct 30, 2012 at 9:37 PM, Torsten Kaiser
> > <just.for.lkml@googlemail.com> wrote:
> >> I will keep LOCKDEP enabled on that system, and if there really is
> >> another splat, I will report back here. But I rather doubt that this
> >> will be needed.
> >
> > After the patch, I did not see this problem again, but today I found
> > another LOCKDEP report that also looks XFS related.
> > I found it twice in the logs, and as both were slightly different, I
> > will attach both versions.
> 
> > Nov  6 21:57:09 thoregon kernel: [ 9941.104353] 3.7.0-rc4 #1 Not tainted
> > Nov  6 21:57:09 thoregon kernel: [ 9941.104355] inconsistent
> > {RECLAIM_FS-ON-W} -> {IN-RECLAIM_FS-W} usage.
> > Nov  6 21:57:09 thoregon kernel: [ 9941.104430]        CPU0
> > Nov  6 21:57:09 thoregon kernel: [ 9941.104431]        ----
> > Nov  6 21:57:09 thoregon kernel: [ 9941.104432]   lock(&(&ip->i_lock)->mr_lock);
> > Nov  6 21:57:09 thoregon kernel: [ 9941.104433]   <Interrupt>
> > Nov  6 21:57:09 thoregon kernel: [ 9941.104434]
> > lock(&(&ip->i_lock)->mr_lock);
> > Nov  6 21:57:09 thoregon kernel: [ 9941.104435]
> > Nov  6 21:57:09 thoregon kernel: [ 9941.104435]  *** DEADLOCK ***
> 
> Sorry! Copied the wrong report. Your fix only landed in -rc5, so my
> vanilla -rc4 did (also) report the old problem again.
> And I copy&pasted that report instead of the second appearance of the
> new problem.

Can you repost it with line wrapping turned off? The output simply
becomes unreadable when it wraps....

Yeah, I know I can put it back together, but I've got better things
to do with my time than stitch a couple of hundred lines of debug
back into a readable format....

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

^ permalink raw reply	[flat|nested] 31+ messages in thread


* Re: Hang in XFS reclaim on 3.7.0-rc3
  2012-11-18 23:51           ` Dave Chinner
@ 2012-11-19  6:50             ` Torsten Kaiser
  -1 siblings, 0 replies; 31+ messages in thread
From: Torsten Kaiser @ 2012-11-19  6:50 UTC (permalink / raw)
  To: Dave Chinner; +Cc: xfs, Linux Kernel

[-- Attachment #1: Type: text/plain, Size: 2121 bytes --]

On Mon, Nov 19, 2012 at 12:51 AM, Dave Chinner <david@fromorbit.com> wrote:
> On Sun, Nov 18, 2012 at 04:29:22PM +0100, Torsten Kaiser wrote:
>> On Sun, Nov 18, 2012 at 11:24 AM, Torsten Kaiser
>> <just.for.lkml@googlemail.com> wrote:
>> > On Tue, Oct 30, 2012 at 9:37 PM, Torsten Kaiser
>> > <just.for.lkml@googlemail.com> wrote:
>> >> I will keep LOCKDEP enabled on that system, and if there really is
>> >> another splat, I will report back here. But I rather doubt that this
>> >> will be needed.
>> >
>> > After the patch, I did not see this problem again, but today I found
>> > another LOCKDEP report that also looks XFS related.
>> > I found it twice in the logs, and as both were slightly different, I
>> > will attach both versions.
>>
>> > Nov  6 21:57:09 thoregon kernel: [ 9941.104353] 3.7.0-rc4 #1 Not tainted
>> > Nov  6 21:57:09 thoregon kernel: [ 9941.104355] inconsistent
>> > {RECLAIM_FS-ON-W} -> {IN-RECLAIM_FS-W} usage.
>> > Nov  6 21:57:09 thoregon kernel: [ 9941.104430]        CPU0
>> > Nov  6 21:57:09 thoregon kernel: [ 9941.104431]        ----
>> > Nov  6 21:57:09 thoregon kernel: [ 9941.104432]   lock(&(&ip->i_lock)->mr_lock);
>> > Nov  6 21:57:09 thoregon kernel: [ 9941.104433]   <Interrupt>
>> > Nov  6 21:57:09 thoregon kernel: [ 9941.104434]
>> > lock(&(&ip->i_lock)->mr_lock);
>> > Nov  6 21:57:09 thoregon kernel: [ 9941.104435]
>> > Nov  6 21:57:09 thoregon kernel: [ 9941.104435]  *** DEADLOCK ***
>>
>> Sorry! Copied the wrong report. Your fix only landed in -rc5, so my
>> vanilla -rc4 did (also) report the old problem again.
>> And I copy&pasted that report instead of the second appearance of the
>> new problem.
>
> Can you repost it with line wrapping turned off? The output simply
> becomes unreadable when it wraps....
>
> Yeah, I know I can put it back together, but I've got better things
> to do with my time than stitch a couple of hundred lines of debug
> back into a readable format....

Sorry about that, but I can't find any option to turn that off in Gmail.

I have attached the reports instead; I hope that's OK for you.

Thanks for looking into this.

Torsten

[-- Attachment #2: lockdep-reports.txt --]
[-- Type: text/plain, Size: 36869 bytes --]

[110926.972477] 
[110926.972482] =========================================================
[110926.972484] [ INFO: possible irq lock inversion dependency detected ]
[110926.972486] 3.7.0-rc4 #1 Not tainted
[110926.972487] ---------------------------------------------------------
[110926.972489] kswapd0/725 just changed the state of lock:
[110926.972490]  (sb_internal){.+.+.?}, at: [<ffffffff8122b268>] xfs_trans_alloc+0x28/0x50
[110926.972499] but this lock took another, RECLAIM_FS-unsafe lock in the past:
[110926.972500]  (&(&ip->i_lock)->mr_lock/1){+.+.+.}
[110926.972500] 
[110926.972500] and interrupts could create inverse lock ordering between them.
[110926.972500] 
[110926.972503] 
[110926.972503] other info that might help us debug this:
[110926.972504]  Possible interrupt unsafe locking scenario:
[110926.972504] 
[110926.972505]        CPU0                    CPU1
[110926.972506]        ----                    ----
[110926.972507]   lock(&(&ip->i_lock)->mr_lock/1);
[110926.972509]                                local_irq_disable();
[110926.972509]                                lock(sb_internal);
[110926.972511]                                lock(&(&ip->i_lock)->mr_lock/1);
[110926.972512]   <Interrupt>
[110926.972513]     lock(sb_internal);
[110926.972514] 
[110926.972514]  *** DEADLOCK ***
[110926.972514] 
[110926.972516] 2 locks held by kswapd0/725:
[110926.972517]  #0:  (shrinker_rwsem){++++..}, at: [<ffffffff810bbd22>] shrink_slab+0x32/0x1f0
[110926.972522]  #1:  (&type->s_umount_key#20){++++.+}, at: [<ffffffff810f5a8e>] grab_super_passive+0x3e/0x90
[110926.972527] 
[110926.972527] the shortest dependencies between 2nd lock and 1st lock:
[110926.972533]  -> (&(&ip->i_lock)->mr_lock/1){+.+.+.} ops: 58117 {
[110926.972536]     HARDIRQ-ON-W at:
[110926.972537]                       [<ffffffff8107f091>] __lock_acquire+0x631/0x1c00
[110926.972540]                       [<ffffffff81080b55>] lock_acquire+0x55/0x70
[110926.972542]                       [<ffffffff8106126a>] down_write_nested+0x4a/0x70
[110926.972545]                       [<ffffffff811e8164>] xfs_ilock+0x84/0xb0
[110926.972548]                       [<ffffffff811f5194>] xfs_create+0x1d4/0x5a0
[110926.972550]                       [<ffffffff811eca1a>] xfs_vn_mknod+0x8a/0x1b0
[110926.972552]                       [<ffffffff811ecb6e>] xfs_vn_create+0xe/0x10
[110926.972554]                       [<ffffffff81100332>] vfs_create+0x72/0xc0
[110926.972556]                       [<ffffffff81100b8e>] do_last.isra.69+0x80e/0xc80
[110926.972558]                       [<ffffffff811010ab>] path_openat.isra.70+0xab/0x490
[110926.972560]                       [<ffffffff8110184d>] do_filp_open+0x3d/0xa0
[110926.972562]                       [<ffffffff810f2139>] do_sys_open+0xf9/0x1e0
[110926.972565]                       [<ffffffff810f223c>] sys_open+0x1c/0x20
[110926.972567]                       [<ffffffff816b2d12>] system_call_fastpath+0x16/0x1b
[110926.972570]     SOFTIRQ-ON-W at:
[110926.972571]                       [<ffffffff8107f0c7>] __lock_acquire+0x667/0x1c00
[110926.972573]                       [<ffffffff81080b55>] lock_acquire+0x55/0x70
[110926.972574]                       [<ffffffff8106126a>] down_write_nested+0x4a/0x70
[110926.972576]                       [<ffffffff811e8164>] xfs_ilock+0x84/0xb0
[110926.972578]                       [<ffffffff811f5194>] xfs_create+0x1d4/0x5a0
[110926.972580]                       [<ffffffff811eca1a>] xfs_vn_mknod+0x8a/0x1b0
[110926.972581]                       [<ffffffff811ecb6e>] xfs_vn_create+0xe/0x10
[110926.972583]                       [<ffffffff81100332>] vfs_create+0x72/0xc0
[110926.972585]                       [<ffffffff81100b8e>] do_last.isra.69+0x80e/0xc80
[110926.972587]                       [<ffffffff811010ab>] path_openat.isra.70+0xab/0x490
[110926.972589]                       [<ffffffff8110184d>] do_filp_open+0x3d/0xa0
[110926.972591]                       [<ffffffff810f2139>] do_sys_open+0xf9/0x1e0
[110926.972593]                       [<ffffffff810f223c>] sys_open+0x1c/0x20
[110926.972595]                       [<ffffffff816b2d12>] system_call_fastpath+0x16/0x1b
[110926.972597]     RECLAIM_FS-ON-W at:
[110926.972598]                          [<ffffffff8108137e>] mark_held_locks+0x7e/0x130
[110926.972600]                          [<ffffffff81081a63>] lockdep_trace_alloc+0x63/0xc0
[110926.972601]                          [<ffffffff810e9dd5>] kmem_cache_alloc+0x35/0xe0
[110926.972603]                          [<ffffffff810dba31>] vm_map_ram+0x271/0x770
[110926.972606]                          [<ffffffff811e1316>] _xfs_buf_map_pages+0x46/0xe0
[110926.972609]                          [<ffffffff811e222a>] xfs_buf_get_map+0x8a/0x130
[110926.972610]                          [<ffffffff81233ab9>] xfs_trans_get_buf_map+0xa9/0xd0
[110926.972613]                          [<ffffffff8121bced>] xfs_ialloc_inode_init+0xcd/0x1d0
[110926.972616]                          [<ffffffff8121c25e>] xfs_ialloc_ag_alloc+0x1be/0x560
[110926.972618]                          [<ffffffff8121da65>] xfs_dialloc+0x185/0x2a0
[110926.972619]                          [<ffffffff8121f198>] xfs_ialloc+0x58/0x650
[110926.972621]                          [<ffffffff811f3985>] xfs_dir_ialloc+0x65/0x270
[110926.972623]                          [<ffffffff811f536c>] xfs_create+0x3ac/0x5a0
[110926.972624]                          [<ffffffff811eca1a>] xfs_vn_mknod+0x8a/0x1b0
[110926.972626]                          [<ffffffff811ecb6e>] xfs_vn_create+0xe/0x10
[110926.972628]                          [<ffffffff81100332>] vfs_create+0x72/0xc0
[110926.972630]                          [<ffffffff81100b8e>] do_last.isra.69+0x80e/0xc80
[110926.972632]                          [<ffffffff811010ab>] path_openat.isra.70+0xab/0x490
[110926.972634]                          [<ffffffff8110184d>] do_filp_open+0x3d/0xa0
[110926.972636]                          [<ffffffff810f2139>] do_sys_open+0xf9/0x1e0
[110926.972638]                          [<ffffffff810f223c>] sys_open+0x1c/0x20
[110926.972640]                          [<ffffffff816b2d12>] system_call_fastpath+0x16/0x1b
[110926.972642]     INITIAL USE at:
[110926.972642]                      [<ffffffff8107ed49>] __lock_acquire+0x2e9/0x1c00
[110926.972644]                      [<ffffffff81080b55>] lock_acquire+0x55/0x70
[110926.972646]                      [<ffffffff8106126a>] down_write_nested+0x4a/0x70
[110926.972648]                      [<ffffffff811e8164>] xfs_ilock+0x84/0xb0
[110926.972650]                      [<ffffffff811f5194>] xfs_create+0x1d4/0x5a0
[110926.972651]                      [<ffffffff811eca1a>] xfs_vn_mknod+0x8a/0x1b0
[110926.972653]                      [<ffffffff811ecb6e>] xfs_vn_create+0xe/0x10
[110926.972655]                      [<ffffffff81100332>] vfs_create+0x72/0xc0
[110926.972657]                      [<ffffffff81100b8e>] do_last.isra.69+0x80e/0xc80
[110926.972659]                      [<ffffffff811010ab>] path_openat.isra.70+0xab/0x490
[110926.972661]                      [<ffffffff8110184d>] do_filp_open+0x3d/0xa0
[110926.972663]                      [<ffffffff810f2139>] do_sys_open+0xf9/0x1e0
[110926.972664]                      [<ffffffff810f223c>] sys_open+0x1c/0x20
[110926.972666]                      [<ffffffff816b2d12>] system_call_fastpath+0x16/0x1b
[110926.972668]   }
[110926.972669]   ... key      at: [<ffffffff825b4b81>] __key.41355+0x1/0x8
[110926.972672]   ... acquired at:
[110926.972672]    [<ffffffff81080b55>] lock_acquire+0x55/0x70
[110926.972674]    [<ffffffff8106126a>] down_write_nested+0x4a/0x70
[110926.972676]    [<ffffffff811e8164>] xfs_ilock+0x84/0xb0
[110926.972678]    [<ffffffff811f5194>] xfs_create+0x1d4/0x5a0
[110926.972679]    [<ffffffff811eca1a>] xfs_vn_mknod+0x8a/0x1b0
[110926.972681]    [<ffffffff811ecb6e>] xfs_vn_create+0xe/0x10
[110926.972683]    [<ffffffff81100332>] vfs_create+0x72/0xc0
[110926.972684]    [<ffffffff81100b8e>] do_last.isra.69+0x80e/0xc80
[110926.972686]    [<ffffffff811010ab>] path_openat.isra.70+0xab/0x490
[110926.972688]    [<ffffffff8110184d>] do_filp_open+0x3d/0xa0
[110926.972690]    [<ffffffff810f2139>] do_sys_open+0xf9/0x1e0
[110926.972692]    [<ffffffff810f223c>] sys_open+0x1c/0x20
[110926.972694]    [<ffffffff816b2d12>] system_call_fastpath+0x16/0x1b
[110926.972696] 
[110926.972696] -> (sb_internal){.+.+.?} ops: 1710064 {
[110926.972699]    HARDIRQ-ON-R at:
[110926.972700]                     [<ffffffff8107ef6a>] __lock_acquire+0x50a/0x1c00
[110926.972702]                     [<ffffffff81080b55>] lock_acquire+0x55/0x70
[110926.972704]                     [<ffffffff810f452b>] __sb_start_write+0xab/0x190
[110926.972705]                     [<ffffffff8122b268>] xfs_trans_alloc+0x28/0x50
[110926.972707]                     [<ffffffff811f5147>] xfs_create+0x187/0x5a0
[110926.972709]                     [<ffffffff811eca1a>] xfs_vn_mknod+0x8a/0x1b0
[110926.972711]                     [<ffffffff811ecb6e>] xfs_vn_create+0xe/0x10
[110926.972712]                     [<ffffffff81100332>] vfs_create+0x72/0xc0
[110926.972714]                     [<ffffffff81100b8e>] do_last.isra.69+0x80e/0xc80
[110926.972716]                     [<ffffffff811010ab>] path_openat.isra.70+0xab/0x490
[110926.972718]                     [<ffffffff8110184d>] do_filp_open+0x3d/0xa0
[110926.972720]                     [<ffffffff810f2139>] do_sys_open+0xf9/0x1e0
[110926.972722]                     [<ffffffff810f223c>] sys_open+0x1c/0x20
[110926.972724]                     [<ffffffff816b2d12>] system_call_fastpath+0x16/0x1b
[110926.972726]    SOFTIRQ-ON-R at:
[110926.972727]                     [<ffffffff8107f0c7>] __lock_acquire+0x667/0x1c00
[110926.972728]                     [<ffffffff81080b55>] lock_acquire+0x55/0x70
[110926.972730]                     [<ffffffff810f452b>] __sb_start_write+0xab/0x190
[110926.972732]                     [<ffffffff8122b268>] xfs_trans_alloc+0x28/0x50
[110926.972734]                     [<ffffffff811f5147>] xfs_create+0x187/0x5a0
[110926.972735]                     [<ffffffff811eca1a>] xfs_vn_mknod+0x8a/0x1b0
[110926.972737]                     [<ffffffff811ecb6e>] xfs_vn_create+0xe/0x10
[110926.972739]                     [<ffffffff81100332>] vfs_create+0x72/0xc0
[110926.972741]                     [<ffffffff81100b8e>] do_last.isra.69+0x80e/0xc80
[110926.972743]                     [<ffffffff811010ab>] path_openat.isra.70+0xab/0x490
[110926.972745]                     [<ffffffff8110184d>] do_filp_open+0x3d/0xa0
[110926.972747]                     [<ffffffff810f2139>] do_sys_open+0xf9/0x1e0
[110926.972749]                     [<ffffffff810f223c>] sys_open+0x1c/0x20
[110926.972750]                     [<ffffffff816b2d12>] system_call_fastpath+0x16/0x1b
[110926.972752]    IN-RECLAIM_FS-R at:
[110926.972753]                        [<ffffffff8107efdf>] __lock_acquire+0x57f/0x1c00
[110926.972755]                        [<ffffffff81080b55>] lock_acquire+0x55/0x70
[110926.972757]                        [<ffffffff810f452b>] __sb_start_write+0xab/0x190
[110926.972758]                        [<ffffffff8122b268>] xfs_trans_alloc+0x28/0x50
[110926.972760]                        [<ffffffff811f3e54>] xfs_free_eofblocks+0x104/0x250
[110926.972762]                        [<ffffffff811f4b29>] xfs_inactive+0xa9/0x480
[110926.972763]                        [<ffffffff811efe00>] xfs_fs_evict_inode+0x70/0x80
[110926.972765]                        [<ffffffff8110cb8f>] evict+0xaf/0x1b0
[110926.972768]                        [<ffffffff8110d219>] dispose_list+0x39/0x50
[110926.972770]                        [<ffffffff8110dc73>] prune_icache_sb+0x183/0x340
[110926.972772]                        [<ffffffff810f5bd3>] prune_super+0xf3/0x1a0
[110926.972773]                        [<ffffffff810bbe0e>] shrink_slab+0x11e/0x1f0
[110926.972775]                        [<ffffffff810be400>] kswapd+0x690/0xa10
[110926.972777]                        [<ffffffff8105c246>] kthread+0xd6/0xe0
[110926.972779]                        [<ffffffff816b2c6c>] ret_from_fork+0x7c/0xb0
[110926.972781]    RECLAIM_FS-ON-R at:
[110926.972782]                        [<ffffffff8108137e>] mark_held_locks+0x7e/0x130
[110926.972784]                        [<ffffffff81081a63>] lockdep_trace_alloc+0x63/0xc0
[110926.972785]                        [<ffffffff810e9dd5>] kmem_cache_alloc+0x35/0xe0
[110926.972787]                        [<ffffffff811f6d3f>] kmem_zone_alloc+0x5f/0xe0
[110926.972789]                        [<ffffffff811f6dd8>] kmem_zone_zalloc+0x18/0x50
[110926.972790]                        [<ffffffff8122b1e2>] _xfs_trans_alloc+0x32/0x90
[110926.972792]                        [<ffffffff8122b278>] xfs_trans_alloc+0x38/0x50
[110926.972794]                        [<ffffffff811f5147>] xfs_create+0x187/0x5a0
[110926.972796]                        [<ffffffff811eca1a>] xfs_vn_mknod+0x8a/0x1b0
[110926.972797]                        [<ffffffff811ecb6e>] xfs_vn_create+0xe/0x10
[110926.972799]                        [<ffffffff81100332>] vfs_create+0x72/0xc0
[110926.972801]                        [<ffffffff81100b8e>] do_last.isra.69+0x80e/0xc80
[110926.972803]                        [<ffffffff811010ab>] path_openat.isra.70+0xab/0x490
[110926.972805]                        [<ffffffff8110184d>] do_filp_open+0x3d/0xa0
[110926.972807]                        [<ffffffff810f2139>] do_sys_open+0xf9/0x1e0
[110926.972809]                        [<ffffffff810f223c>] sys_open+0x1c/0x20
[110926.972811]                        [<ffffffff816b2d12>] system_call_fastpath+0x16/0x1b
[110926.972813]    INITIAL USE at:
[110926.972814]                    [<ffffffff8107ed49>] __lock_acquire+0x2e9/0x1c00
[110926.972815]                    [<ffffffff81080b55>] lock_acquire+0x55/0x70
[110926.972817]                    [<ffffffff810f452b>] __sb_start_write+0xab/0x190
[110926.972819]                    [<ffffffff8122b268>] xfs_trans_alloc+0x28/0x50
[110926.972820]                    [<ffffffff811f5147>] xfs_create+0x187/0x5a0
[110926.972822]                    [<ffffffff811eca1a>] xfs_vn_mknod+0x8a/0x1b0
[110926.972824]                    [<ffffffff811ecb6e>] xfs_vn_create+0xe/0x10
[110926.972826]                    [<ffffffff81100332>] vfs_create+0x72/0xc0
[110926.972827]                    [<ffffffff81100b8e>] do_last.isra.69+0x80e/0xc80
[110926.972829]                    [<ffffffff811010ab>] path_openat.isra.70+0xab/0x490
[110926.972831]                    [<ffffffff8110184d>] do_filp_open+0x3d/0xa0
[110926.972833]                    [<ffffffff810f2139>] do_sys_open+0xf9/0x1e0
[110926.972835]                    [<ffffffff810f223c>] sys_open+0x1c/0x20
[110926.972837]                    [<ffffffff816b2d12>] system_call_fastpath+0x16/0x1b
[110926.972839]  }
[110926.972840]  ... key      at: [<ffffffff81c34e40>] xfs_fs_type+0x60/0x80
[110926.972842]  ... acquired at:
[110926.972843]    [<ffffffff8107df3b>] check_usage_forwards+0x10b/0x140
[110926.972845]    [<ffffffff8107e900>] mark_lock+0x190/0x2f0
[110926.972846]    [<ffffffff8107efdf>] __lock_acquire+0x57f/0x1c00
[110926.972848]    [<ffffffff81080b55>] lock_acquire+0x55/0x70
[110926.972850]    [<ffffffff810f452b>] __sb_start_write+0xab/0x190
[110926.972851]    [<ffffffff8122b268>] xfs_trans_alloc+0x28/0x50
[110926.972853]    [<ffffffff811f3e54>] xfs_free_eofblocks+0x104/0x250
[110926.972855]    [<ffffffff811f4b29>] xfs_inactive+0xa9/0x480
[110926.972856]    [<ffffffff811efe00>] xfs_fs_evict_inode+0x70/0x80
[110926.972858]    [<ffffffff8110cb8f>] evict+0xaf/0x1b0
[110926.972860]    [<ffffffff8110d219>] dispose_list+0x39/0x50
[110926.972861]    [<ffffffff8110dc73>] prune_icache_sb+0x183/0x340
[110926.972863]    [<ffffffff810f5bd3>] prune_super+0xf3/0x1a0
[110926.972865]    [<ffffffff810bbe0e>] shrink_slab+0x11e/0x1f0
[110926.972866]    [<ffffffff810be400>] kswapd+0x690/0xa10
[110926.972868]    [<ffffffff8105c246>] kthread+0xd6/0xe0
[110926.972870]    [<ffffffff816b2c6c>] ret_from_fork+0x7c/0xb0
[110926.972871] 
[110926.972872] 
[110926.972872] stack backtrace:
[110926.972874] Pid: 725, comm: kswapd0 Not tainted 3.7.0-rc4 #1
[110926.972875] Call Trace:
[110926.972878]  [<ffffffff8107de28>] print_irq_inversion_bug.part.37+0x1e8/0x1f0
[110926.972880]  [<ffffffff8107df3b>] check_usage_forwards+0x10b/0x140
[110926.972883]  [<ffffffff8107e900>] mark_lock+0x190/0x2f0
[110926.972885]  [<ffffffff8107de30>] ? print_irq_inversion_bug.part.37+0x1f0/0x1f0
[110926.972887]  [<ffffffff8107efdf>] __lock_acquire+0x57f/0x1c00
[110926.972889]  [<ffffffff81220424>] ? xfs_iext_bno_to_ext+0x84/0x160
[110926.972892]  [<ffffffff8120a0e3>] ? xfs_bmbt_get_all+0x13/0x20
[110926.972895]  [<ffffffff81201104>] ? xfs_bmap_search_multi_extents+0xa4/0x110
[110926.972897]  [<ffffffff81080b55>] lock_acquire+0x55/0x70
[110926.972899]  [<ffffffff8122b268>] ? xfs_trans_alloc+0x28/0x50
[110926.972901]  [<ffffffff810f452b>] __sb_start_write+0xab/0x190
[110926.972903]  [<ffffffff8122b268>] ? xfs_trans_alloc+0x28/0x50
[110926.972905]  [<ffffffff8122b268>] ? xfs_trans_alloc+0x28/0x50
[110926.972907]  [<ffffffff8122b268>] xfs_trans_alloc+0x28/0x50
[110926.972908]  [<ffffffff811f3e54>] xfs_free_eofblocks+0x104/0x250
[110926.972910]  [<ffffffff816b1efb>] ? _raw_spin_unlock_irq+0x2b/0x50
[110926.972912]  [<ffffffff811f4b29>] xfs_inactive+0xa9/0x480
[110926.972914]  [<ffffffff816b1efb>] ? _raw_spin_unlock_irq+0x2b/0x50
[110926.972916]  [<ffffffff811efe00>] xfs_fs_evict_inode+0x70/0x80
[110926.972918]  [<ffffffff8110cb8f>] evict+0xaf/0x1b0
[110926.972920]  [<ffffffff8110d219>] dispose_list+0x39/0x50
[110926.972922]  [<ffffffff8110dc73>] prune_icache_sb+0x183/0x340
[110926.972924]  [<ffffffff810f5bd3>] prune_super+0xf3/0x1a0
[110926.972926]  [<ffffffff810bbe0e>] shrink_slab+0x11e/0x1f0
[110926.972928]  [<ffffffff810be400>] kswapd+0x690/0xa10
[110926.972930]  [<ffffffff8105ca30>] ? __init_waitqueue_head+0x60/0x60
[110926.972932]  [<ffffffff810bdd70>] ? shrink_lruvec+0x540/0x540
[110926.972934]  [<ffffffff8105c246>] kthread+0xd6/0xe0
[110926.972936]  [<ffffffff816b1efb>] ? _raw_spin_unlock_irq+0x2b/0x50
[110926.972938]  [<ffffffff8105c170>] ? flush_kthread_worker+0xe0/0xe0
[110926.972940]  [<ffffffff816b2c6c>] ret_from_fork+0x7c/0xb0
[110926.972942]  [<ffffffff8105c170>] ? flush_kthread_worker+0xe0/0xe0

[66571.610863] 
[66571.610869] =========================================================
[66571.610870] [ INFO: possible irq lock inversion dependency detected ]
[66571.610873] 3.7.0-rc5 #1 Not tainted
[66571.610874] ---------------------------------------------------------
[66571.610875] cc1/21330 just changed the state of lock:
[66571.610877]  (sb_internal){.+.+.?}, at: [<ffffffff8122b138>] xfs_trans_alloc+0x28/0x50
[66571.610885] but this lock took another, RECLAIM_FS-unsafe lock in the past:
[66571.610886]  (&(&ip->i_lock)->mr_lock/1){+.+.+.}
[66571.610886] 
[66571.610886] and interrupts could create inverse lock ordering between them.
[66571.610886] 
[66571.610890] 
[66571.610890] other info that might help us debug this:
[66571.610891]  Possible interrupt unsafe locking scenario:
[66571.610891] 
[66571.610892]        CPU0                    CPU1
[66571.610893]        ----                    ----
[66571.610894]   lock(&(&ip->i_lock)->mr_lock/1);
[66571.610896]                                local_irq_disable();
[66571.610897]                                lock(sb_internal);
[66571.610898]                                lock(&(&ip->i_lock)->mr_lock/1);
[66571.610900]   <Interrupt>
[66571.610901]     lock(sb_internal);
[66571.610902] 
[66571.610902]  *** DEADLOCK ***
[66571.610902] 
[66571.610904] 3 locks held by cc1/21330:
[66571.610905]  #0:  (&mm->mmap_sem){++++++}, at: [<ffffffff81029d8b>] __do_page_fault+0xfb/0x480
[66571.610910]  #1:  (shrinker_rwsem){++++..}, at: [<ffffffff810bbd02>] shrink_slab+0x32/0x1f0
[66571.610915]  #2:  (&type->s_umount_key#20){++++.+}, at: [<ffffffff810f5a7e>] grab_super_passive+0x3e/0x90
[66571.610921] 
[66571.610921] the shortest dependencies between 2nd lock and 1st lock:
[66571.610927]  -> (&(&ip->i_lock)->mr_lock/1){+.+.+.} ops: 169649 {
[66571.610931]     HARDIRQ-ON-W at:
[66571.610932]                       [<ffffffff8107f091>] __lock_acquire+0x631/0x1c00
[66571.610935]                       [<ffffffff81080b55>] lock_acquire+0x55/0x70
[66571.610937]                       [<ffffffff8106126a>] down_write_nested+0x4a/0x70
[66571.610941]                       [<ffffffff811e8164>] xfs_ilock+0x84/0xb0
[66571.610944]                       [<ffffffff811f51a4>] xfs_create+0x1d4/0x5a0
[66571.610946]                       [<ffffffff811eca2a>] xfs_vn_mknod+0x8a/0x1b0
[66571.610948]                       [<ffffffff811ecb7e>] xfs_vn_create+0xe/0x10
[66571.610950]                       [<ffffffff81100322>] vfs_create+0x72/0xc0
[66571.610953]                       [<ffffffff81100b7e>] do_last.isra.69+0x80e/0xc80
[66571.610955]                       [<ffffffff8110109b>] path_openat.isra.70+0xab/0x490
[66571.610957]                       [<ffffffff8110183d>] do_filp_open+0x3d/0xa0
[66571.610959]                       [<ffffffff810f2129>] do_sys_open+0xf9/0x1e0
[66571.610962]                       [<ffffffff810f222c>] sys_open+0x1c/0x20
[66571.610964]                       [<ffffffff816b2e52>] system_call_fastpath+0x16/0x1b
[66571.610967]     SOFTIRQ-ON-W at:
[66571.610968]                       [<ffffffff8107f0c7>] __lock_acquire+0x667/0x1c00
[66571.610970]                       [<ffffffff81080b55>] lock_acquire+0x55/0x70
[66571.610972]                       [<ffffffff8106126a>] down_write_nested+0x4a/0x70
[66571.610974]                       [<ffffffff811e8164>] xfs_ilock+0x84/0xb0
[66571.610976]                       [<ffffffff811f51a4>] xfs_create+0x1d4/0x5a0
[66571.610977]                       [<ffffffff811eca2a>] xfs_vn_mknod+0x8a/0x1b0
[66571.610979]                       [<ffffffff811ecb7e>] xfs_vn_create+0xe/0x10
[66571.610981]                       [<ffffffff81100322>] vfs_create+0x72/0xc0
[66571.610983]                       [<ffffffff81100b7e>] do_last.isra.69+0x80e/0xc80
[66571.610985]                       [<ffffffff8110109b>] path_openat.isra.70+0xab/0x490
[66571.610987]                       [<ffffffff8110183d>] do_filp_open+0x3d/0xa0
[66571.610989]                       [<ffffffff810f2129>] do_sys_open+0xf9/0x1e0
[66571.610991]                       [<ffffffff810f222c>] sys_open+0x1c/0x20
[66571.610993]                       [<ffffffff816b2e52>] system_call_fastpath+0x16/0x1b
[66571.610995]     RECLAIM_FS-ON-W at:
[66571.610996]                          [<ffffffff8108137e>] mark_held_locks+0x7e/0x130
[66571.610998]                          [<ffffffff81081a63>] lockdep_trace_alloc+0x63/0xc0
[66571.610999]                          [<ffffffff810e9dc5>] kmem_cache_alloc+0x35/0xe0
[66571.611002]                          [<ffffffff810dba21>] vm_map_ram+0x271/0x770
[66571.611004]                          [<ffffffff811e12b6>] _xfs_buf_map_pages+0x46/0xe0
[66571.611008]                          [<ffffffff811e21ca>] xfs_buf_get_map+0x8a/0x130
[66571.611009]                          [<ffffffff81233989>] xfs_trans_get_buf_map+0xa9/0xd0
[66571.611011]                          [<ffffffff8121bc2d>] xfs_ialloc_inode_init+0xcd/0x1d0
[66571.611015]                          [<ffffffff8121c16f>] xfs_ialloc_ag_alloc+0x18f/0x500
[66571.611017]                          [<ffffffff8121d955>] xfs_dialloc+0x185/0x2a0
[66571.611019]                          [<ffffffff8121f068>] xfs_ialloc+0x58/0x650
[66571.611021]                          [<ffffffff811f3995>] xfs_dir_ialloc+0x65/0x270
[66571.611023]                          [<ffffffff811f537c>] xfs_create+0x3ac/0x5a0
[66571.611024]                          [<ffffffff811eca2a>] xfs_vn_mknod+0x8a/0x1b0
[66571.611026]                          [<ffffffff811ecb61>] xfs_vn_mkdir+0x11/0x20
[66571.611028]                          [<ffffffff8110016f>] vfs_mkdir+0x7f/0xd0
[66571.611030]                          [<ffffffff81101b83>] sys_mkdirat+0x43/0x80
[66571.611032]                          [<ffffffff81101bd4>] sys_mkdir+0x14/0x20
[66571.611034]                          [<ffffffff816b2e52>] system_call_fastpath+0x16/0x1b
[66571.611036]     INITIAL USE at:
[66571.611037]                      [<ffffffff8107ed49>] __lock_acquire+0x2e9/0x1c00
[66571.611038]                      [<ffffffff81080b55>] lock_acquire+0x55/0x70
[66571.611040]                      [<ffffffff8106126a>] down_write_nested+0x4a/0x70
[66571.611042]                      [<ffffffff811e8164>] xfs_ilock+0x84/0xb0
[66571.611044]                      [<ffffffff811f51a4>] xfs_create+0x1d4/0x5a0
[66571.611046]                      [<ffffffff811eca2a>] xfs_vn_mknod+0x8a/0x1b0
[66571.611047]                      [<ffffffff811ecb7e>] xfs_vn_create+0xe/0x10
[66571.611049]                      [<ffffffff81100322>] vfs_create+0x72/0xc0
[66571.611051]                      [<ffffffff81100b7e>] do_last.isra.69+0x80e/0xc80
[66571.611053]                      [<ffffffff8110109b>] path_openat.isra.70+0xab/0x490
[66571.611055]                      [<ffffffff8110183d>] do_filp_open+0x3d/0xa0
[66571.611057]                      [<ffffffff810f2129>] do_sys_open+0xf9/0x1e0
[66571.611059]                      [<ffffffff810f222c>] sys_open+0x1c/0x20
[66571.611061]                      [<ffffffff816b2e52>] system_call_fastpath+0x16/0x1b
[66571.611063]   }
[66571.611064]   ... key      at: [<ffffffff825b4b81>] __key.41357+0x1/0x8
[66571.611066]   ... acquired at:
[66571.611067]    [<ffffffff81080b55>] lock_acquire+0x55/0x70
[66571.611069]    [<ffffffff8106126a>] down_write_nested+0x4a/0x70
[66571.611071]    [<ffffffff811e8164>] xfs_ilock+0x84/0xb0
[66571.611073]    [<ffffffff811f51a4>] xfs_create+0x1d4/0x5a0
[66571.611074]    [<ffffffff811eca2a>] xfs_vn_mknod+0x8a/0x1b0
[66571.611076]    [<ffffffff811ecb7e>] xfs_vn_create+0xe/0x10
[66571.611078]    [<ffffffff81100322>] vfs_create+0x72/0xc0
[66571.611080]    [<ffffffff81100b7e>] do_last.isra.69+0x80e/0xc80
[66571.611082]    [<ffffffff8110109b>] path_openat.isra.70+0xab/0x490
[66571.611084]    [<ffffffff8110183d>] do_filp_open+0x3d/0xa0
[66571.611086]    [<ffffffff810f2129>] do_sys_open+0xf9/0x1e0
[66571.611088]    [<ffffffff810f222c>] sys_open+0x1c/0x20
[66571.611090]    [<ffffffff816b2e52>] system_call_fastpath+0x16/0x1b
[66571.611091] 
[66571.611092] -> (sb_internal){.+.+.?} ops: 1341531 {
[66571.611095]    HARDIRQ-ON-R at:
[66571.611096]                     [<ffffffff8107ef6a>] __lock_acquire+0x50a/0x1c00
[66571.611098]                     [<ffffffff81080b55>] lock_acquire+0x55/0x70
[66571.611100]                     [<ffffffff810f451b>] __sb_start_write+0xab/0x190
[66571.611102]                     [<ffffffff8122b138>] xfs_trans_alloc+0x28/0x50
[66571.611104]                     [<ffffffff811f5157>] xfs_create+0x187/0x5a0
[66571.611105]                     [<ffffffff811eca2a>] xfs_vn_mknod+0x8a/0x1b0
[66571.611107]                     [<ffffffff811ecb7e>] xfs_vn_create+0xe/0x10
[66571.611109]                     [<ffffffff81100322>] vfs_create+0x72/0xc0
[66571.611111]                     [<ffffffff81100b7e>] do_last.isra.69+0x80e/0xc80
[66571.611113]                     [<ffffffff8110109b>] path_openat.isra.70+0xab/0x490
[66571.611115]                     [<ffffffff8110183d>] do_filp_open+0x3d/0xa0
[66571.611117]                     [<ffffffff810f2129>] do_sys_open+0xf9/0x1e0
[66571.611119]                     [<ffffffff810f222c>] sys_open+0x1c/0x20
[66571.611121]                     [<ffffffff816b2e52>] system_call_fastpath+0x16/0x1b
[66571.611123]    SOFTIRQ-ON-R at:
[66571.611124]                     [<ffffffff8107f0c7>] __lock_acquire+0x667/0x1c00
[66571.611126]                     [<ffffffff81080b55>] lock_acquire+0x55/0x70
[66571.611128]                     [<ffffffff810f451b>] __sb_start_write+0xab/0x190
[66571.611130]                     [<ffffffff8122b138>] xfs_trans_alloc+0x28/0x50
[66571.611132]                     [<ffffffff811f5157>] xfs_create+0x187/0x5a0
[66571.611133]                     [<ffffffff811eca2a>] xfs_vn_mknod+0x8a/0x1b0
[66571.611135]                     [<ffffffff811ecb7e>] xfs_vn_create+0xe/0x10
[66571.611137]                     [<ffffffff81100322>] vfs_create+0x72/0xc0
[66571.611139]                     [<ffffffff81100b7e>] do_last.isra.69+0x80e/0xc80
[66571.611141]                     [<ffffffff8110109b>] path_openat.isra.70+0xab/0x490
[66571.611143]                     [<ffffffff8110183d>] do_filp_open+0x3d/0xa0
[66571.611145]                     [<ffffffff810f2129>] do_sys_open+0xf9/0x1e0
[66571.611147]                     [<ffffffff810f222c>] sys_open+0x1c/0x20
[66571.611149]                     [<ffffffff816b2e52>] system_call_fastpath+0x16/0x1b
[66571.611151]    IN-RECLAIM_FS-R at:
[66571.611152]                        [<ffffffff8107efdf>] __lock_acquire+0x57f/0x1c00
[66571.611154]                        [<ffffffff81080b55>] lock_acquire+0x55/0x70
[66571.611156]                        [<ffffffff810f451b>] __sb_start_write+0xab/0x190
[66571.611158]                        [<ffffffff8122b138>] xfs_trans_alloc+0x28/0x50
[66571.611159]                        [<ffffffff811f3e64>] xfs_free_eofblocks+0x104/0x250
[66571.611161]                        [<ffffffff811f4b39>] xfs_inactive+0xa9/0x480
[66571.611163]                        [<ffffffff811efe10>] xfs_fs_evict_inode+0x70/0x80
[66571.611165]                        [<ffffffff8110cb7f>] evict+0xaf/0x1b0
[66571.611168]                        [<ffffffff8110d209>] dispose_list+0x39/0x50
[66571.611170]                        [<ffffffff8110dc63>] prune_icache_sb+0x183/0x340
[66571.611172]                        [<ffffffff810f5bc3>] prune_super+0xf3/0x1a0
[66571.611173]                        [<ffffffff810bbdee>] shrink_slab+0x11e/0x1f0
[66571.611175]                        [<ffffffff810be98f>] try_to_free_pages+0x21f/0x4e0
[66571.611177]                        [<ffffffff810b5ec6>] __alloc_pages_nodemask+0x506/0x800
[66571.611179]                        [<ffffffff810ce56e>] handle_pte_fault+0x5ae/0x7a0
[66571.611182]                        [<ffffffff810cf769>] handle_mm_fault+0x1f9/0x2a0
[66571.611184]                        [<ffffffff81029dfc>] __do_page_fault+0x16c/0x480
[66571.611186]                        [<ffffffff8102a139>] do_page_fault+0x9/0x10
[66571.611188]                        [<ffffffff816b287f>] page_fault+0x1f/0x30
[66571.611190]    RECLAIM_FS-ON-R at:
[66571.611191]                        [<ffffffff8108137e>] mark_held_locks+0x7e/0x130
[66571.611193]                        [<ffffffff81081a63>] lockdep_trace_alloc+0x63/0xc0
[66571.611195]                        [<ffffffff810e9dc5>] kmem_cache_alloc+0x35/0xe0
[66571.611197]                        [<ffffffff811f6d4f>] kmem_zone_alloc+0x5f/0xe0
[66571.611198]                        [<ffffffff811f6de8>] kmem_zone_zalloc+0x18/0x50
[66571.611200]                        [<ffffffff8122b0b2>] _xfs_trans_alloc+0x32/0x90
[66571.611202]                        [<ffffffff8122b148>] xfs_trans_alloc+0x38/0x50
[66571.611204]                        [<ffffffff811f5157>] xfs_create+0x187/0x5a0
[66571.611205]                        [<ffffffff811eca2a>] xfs_vn_mknod+0x8a/0x1b0
[66571.611207]                        [<ffffffff811ecb7e>] xfs_vn_create+0xe/0x10
[66571.611209]                        [<ffffffff81100322>] vfs_create+0x72/0xc0
[66571.611211]                        [<ffffffff81100b7e>] do_last.isra.69+0x80e/0xc80
[66571.611213]                        [<ffffffff8110109b>] path_openat.isra.70+0xab/0x490
[66571.611215]                        [<ffffffff8110183d>] do_filp_open+0x3d/0xa0
[66571.611217]                        [<ffffffff810f2129>] do_sys_open+0xf9/0x1e0
[66571.611219]                        [<ffffffff810f222c>] sys_open+0x1c/0x20
[66571.611221]                        [<ffffffff816b2e52>] system_call_fastpath+0x16/0x1b
[66571.611223]    INITIAL USE at:
[66571.611224]                    [<ffffffff8107ed49>] __lock_acquire+0x2e9/0x1c00
[66571.611225]                    [<ffffffff81080b55>] lock_acquire+0x55/0x70
[66571.611227]                    [<ffffffff810f451b>] __sb_start_write+0xab/0x190
[66571.611229]                    [<ffffffff8122b138>] xfs_trans_alloc+0x28/0x50
[66571.611231]                    [<ffffffff811f5157>] xfs_create+0x187/0x5a0
[66571.611232]                    [<ffffffff811eca2a>] xfs_vn_mknod+0x8a/0x1b0
[66571.611234]                    [<ffffffff811ecb7e>] xfs_vn_create+0xe/0x10
[66571.611236]                    [<ffffffff81100322>] vfs_create+0x72/0xc0
[66571.611238]                    [<ffffffff81100b7e>] do_last.isra.69+0x80e/0xc80
[66571.611240]                    [<ffffffff8110109b>] path_openat.isra.70+0xab/0x490
[66571.611242]                    [<ffffffff8110183d>] do_filp_open+0x3d/0xa0
[66571.611244]                    [<ffffffff810f2129>] do_sys_open+0xf9/0x1e0
[66571.611246]                    [<ffffffff810f222c>] sys_open+0x1c/0x20
[66571.611248]                    [<ffffffff816b2e52>] system_call_fastpath+0x16/0x1b
[66571.611250]  }
[66571.611251]  ... key      at: [<ffffffff81c34e40>] xfs_fs_type+0x60/0x80
[66571.611254]  ... acquired at:
[66571.611254]    [<ffffffff8107df3b>] check_usage_forwards+0x10b/0x140
[66571.611256]    [<ffffffff8107e900>] mark_lock+0x190/0x2f0
[66571.611258]    [<ffffffff8107efdf>] __lock_acquire+0x57f/0x1c00
[66571.611260]    [<ffffffff81080b55>] lock_acquire+0x55/0x70
[66571.611261]    [<ffffffff810f451b>] __sb_start_write+0xab/0x190
[66571.611263]    [<ffffffff8122b138>] xfs_trans_alloc+0x28/0x50
[66571.611265]    [<ffffffff811f3e64>] xfs_free_eofblocks+0x104/0x250
[66571.611266]    [<ffffffff811f4b39>] xfs_inactive+0xa9/0x480
[66571.611268]    [<ffffffff811efe10>] xfs_fs_evict_inode+0x70/0x80
[66571.611270]    [<ffffffff8110cb7f>] evict+0xaf/0x1b0
[66571.611271]    [<ffffffff8110d209>] dispose_list+0x39/0x50
[66571.611273]    [<ffffffff8110dc63>] prune_icache_sb+0x183/0x340
[66571.611275]    [<ffffffff810f5bc3>] prune_super+0xf3/0x1a0
[66571.611277]    [<ffffffff810bbdee>] shrink_slab+0x11e/0x1f0
[66571.611278]    [<ffffffff810be98f>] try_to_free_pages+0x21f/0x4e0
[66571.611280]    [<ffffffff810b5ec6>] __alloc_pages_nodemask+0x506/0x800
[66571.611282]    [<ffffffff810ce56e>] handle_pte_fault+0x5ae/0x7a0
[66571.611284]    [<ffffffff810cf769>] handle_mm_fault+0x1f9/0x2a0
[66571.611286]    [<ffffffff81029dfc>] __do_page_fault+0x16c/0x480
[66571.611287]    [<ffffffff8102a139>] do_page_fault+0x9/0x10
[66571.611289]    [<ffffffff816b287f>] page_fault+0x1f/0x30
[66571.611291] 
[66571.611292] 
[66571.611292] stack backtrace:
[66571.611294] Pid: 21330, comm: cc1 Not tainted 3.7.0-rc5 #1
[66571.611295] Call Trace:
[66571.611298]  [<ffffffff8107de28>] print_irq_inversion_bug.part.37+0x1e8/0x1f0
[66571.611300]  [<ffffffff8107df3b>] check_usage_forwards+0x10b/0x140
[66571.611303]  [<ffffffff8107e900>] mark_lock+0x190/0x2f0
[66571.611306]  [<ffffffff8150406e>] ? dm_request+0x2e/0x2a0
[66571.611308]  [<ffffffff8107de30>] ? print_irq_inversion_bug.part.37+0x1f0/0x1f0
[66571.611310]  [<ffffffff8107efdf>] __lock_acquire+0x57f/0x1c00
[66571.611313]  [<ffffffff812202f4>] ? xfs_iext_bno_to_ext+0x84/0x160
[66571.611316]  [<ffffffff8120a023>] ? xfs_bmbt_get_all+0x13/0x20
[66571.611318]  [<ffffffff81200fb4>] ? xfs_bmap_search_multi_extents+0xa4/0x110
[66571.611320]  [<ffffffff81080b55>] lock_acquire+0x55/0x70
[66571.611322]  [<ffffffff8122b138>] ? xfs_trans_alloc+0x28/0x50
[66571.611324]  [<ffffffff810f451b>] __sb_start_write+0xab/0x190
[66571.611326]  [<ffffffff8122b138>] ? xfs_trans_alloc+0x28/0x50
[66571.611328]  [<ffffffff8122b138>] ? xfs_trans_alloc+0x28/0x50
[66571.611330]  [<ffffffff8122b138>] xfs_trans_alloc+0x28/0x50
[66571.611332]  [<ffffffff811f3e64>] xfs_free_eofblocks+0x104/0x250
[66571.611334]  [<ffffffff816b204b>] ? _raw_spin_unlock_irq+0x2b/0x50
[66571.611336]  [<ffffffff811f4b39>] xfs_inactive+0xa9/0x480
[66571.611337]  [<ffffffff816b204b>] ? _raw_spin_unlock_irq+0x2b/0x50
[66571.611340]  [<ffffffff811efe10>] xfs_fs_evict_inode+0x70/0x80
[66571.611342]  [<ffffffff8110cb7f>] evict+0xaf/0x1b0
[66571.611344]  [<ffffffff8110d209>] dispose_list+0x39/0x50
[66571.611346]  [<ffffffff8110dc63>] prune_icache_sb+0x183/0x340
[66571.611347]  [<ffffffff810f5bc3>] prune_super+0xf3/0x1a0
[66571.611349]  [<ffffffff810bbdee>] shrink_slab+0x11e/0x1f0
[66571.611352]  [<ffffffff810be98f>] try_to_free_pages+0x21f/0x4e0
[66571.611354]  [<ffffffff810b5ec6>] __alloc_pages_nodemask+0x506/0x800
[66571.611356]  [<ffffffff810b9e40>] ? lru_deactivate_fn+0x1c0/0x1c0
[66571.611358]  [<ffffffff810ce56e>] handle_pte_fault+0x5ae/0x7a0
[66571.611360]  [<ffffffff810cf769>] handle_mm_fault+0x1f9/0x2a0
[66571.611363]  [<ffffffff81029dfc>] __do_page_fault+0x16c/0x480
[66571.611366]  [<ffffffff8129c7ad>] ? trace_hardirqs_off_thunk+0x3a/0x3c
[66571.611368]  [<ffffffff8102a139>] do_page_fault+0x9/0x10
[66571.611370]  [<ffffffff816b287f>] page_fault+0x1f/0x30

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: Hang in XFS reclaim on 3.7.0-rc3
@ 2012-11-19  6:50             ` Torsten Kaiser
  0 siblings, 0 replies; 31+ messages in thread
From: Torsten Kaiser @ 2012-11-19  6:50 UTC (permalink / raw)
  To: Dave Chinner; +Cc: Linux Kernel, xfs

[-- Attachment #1: Type: text/plain, Size: 2121 bytes --]

On Mon, Nov 19, 2012 at 12:51 AM, Dave Chinner <david@fromorbit.com> wrote:
> On Sun, Nov 18, 2012 at 04:29:22PM +0100, Torsten Kaiser wrote:
>> On Sun, Nov 18, 2012 at 11:24 AM, Torsten Kaiser
>> <just.for.lkml@googlemail.com> wrote:
>> > On Tue, Oct 30, 2012 at 9:37 PM, Torsten Kaiser
>> > <just.for.lkml@googlemail.com> wrote:
>> >> I will keep LOCKDEP enabled on that system, and if there really is
>> >> another splat, I will report back here. But I rather doubt that this
>> >> will be needed.
>> >
>> > After the patch, I did not see this problem again, but today I found
>> > another LOCKDEP report that also looks XFS related.
>> > I found it twice in the logs, and as both were slightly different, I
>> > will attach both versions.
>>
>> > Nov  6 21:57:09 thoregon kernel: [ 9941.104353] 3.7.0-rc4 #1 Not tainted
>> > Nov  6 21:57:09 thoregon kernel: [ 9941.104355] inconsistent
>> > {RECLAIM_FS-ON-W} -> {IN-RECLAIM_FS-W} usage.
>> > Nov  6 21:57:09 thoregon kernel: [ 9941.104430]        CPU0
>> > Nov  6 21:57:09 thoregon kernel: [ 9941.104431]        ----
>> > Nov  6 21:57:09 thoregon kernel: [ 9941.104432]   lock(&(&ip->i_lock)->mr_lock);
>> > Nov  6 21:57:09 thoregon kernel: [ 9941.104433]   <Interrupt>
>> > Nov  6 21:57:09 thoregon kernel: [ 9941.104434]
>> > lock(&(&ip->i_lock)->mr_lock);
>> > Nov  6 21:57:09 thoregon kernel: [ 9941.104435]
>> > Nov  6 21:57:09 thoregon kernel: [ 9941.104435]  *** DEADLOCK ***
>>
>> Sorry! Copied the wrong report. Your fix only landed in -rc5, so my
>> vanilla -rc4 did (also) report the old problem again.
>> And I copy&pasted that report instead of the second appearance of the
>> new problem.
>
> Can you repost it with line wrapping turned off? The output simply
> becomes unreadable when it wraps....
>
> Yeah, I know I can put it back together, but I've got better things
> to do with my time than stitch a couple of hundred lines of debug
> back into a readable format....

Sorry about that, but I can't find any option to turn that off in Gmail.

I have added the reports as an attachment; I hope that's OK for you.

Thanks for looking into this.

Torsten
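
P.S.: For anyone trying to follow the attached reports: the inversion lockdep
is flagging is the classic ABBA pattern — xfs_create takes sb_internal (via
xfs_trans_alloc) and then the inode's mr_lock, while the reclaim path
(prune_icache_sb -> xfs_inactive -> xfs_trans_alloc) can run in a context
that already implies mr_lock and then takes sb_internal. The detection idea
can be sketched with a toy lock-order tracker (illustrative only, not kernel
code; lock names chosen to mirror the report, everything else is made up):

```python
# Toy model of lockdep's lock-order tracking (vastly simplified; the real
# lockdep also tracks irq/reclaim states per lock class). Two lock "classes"
# stand in for XFS's ip->i_lock (mr_lock) and the VFS sb_internal freeze lock.

class OrderTracker:
    def __init__(self):
        self.edges = set()   # (a, b) means: a was held while b was acquired
        self.held = []       # current acquisition stack

    def acquire(self, name):
        for h in self.held:
            # Record the ordering h -> name.
            self.edges.add((h, name))
            # If the opposite ordering was ever seen, two contexts can
            # deadlock against each other (ABBA inversion).
            if (name, h) in self.edges:
                raise RuntimeError(
                    f"possible deadlock: {h} -> {name} inverts {name} -> {h}")
        self.held.append(name)

    def release(self, name):
        self.held.remove(name)

t = OrderTracker()
# xfs_create path: transaction started (sb_internal), then inode locked.
t.acquire("sb_internal"); t.acquire("mr_lock")
t.release("mr_lock"); t.release("sb_internal")
# Reclaim path: inode lock context enters reclaim, which starts a transaction.
t.acquire("mr_lock")
try:
    t.acquire("sb_internal")
except RuntimeError as e:
    print(e)  # → possible deadlock: mr_lock -> sb_internal inverts sb_internal -> mr_lock
```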

[-- Attachment #2: lockdep-reports.txt --]
[-- Type: text/plain, Size: 36869 bytes --]

[110926.972477] 
[110926.972482] =========================================================
[110926.972484] [ INFO: possible irq lock inversion dependency detected ]
[110926.972486] 3.7.0-rc4 #1 Not tainted
[110926.972487] ---------------------------------------------------------
[110926.972489] kswapd0/725 just changed the state of lock:
[110926.972490]  (sb_internal){.+.+.?}, at: [<ffffffff8122b268>] xfs_trans_alloc+0x28/0x50
[110926.972499] but this lock took another, RECLAIM_FS-unsafe lock in the past:
[110926.972500]  (&(&ip->i_lock)->mr_lock/1){+.+.+.}
[110926.972500] 
[110926.972500] and interrupts could create inverse lock ordering between them.
[110926.972500] 
[110926.972503] 
[110926.972503] other info that might help us debug this:
[110926.972504]  Possible interrupt unsafe locking scenario:
[110926.972504] 
[110926.972505]        CPU0                    CPU1
[110926.972506]        ----                    ----
[110926.972507]   lock(&(&ip->i_lock)->mr_lock/1);
[110926.972509]                                local_irq_disable();
[110926.972509]                                lock(sb_internal);
[110926.972511]                                lock(&(&ip->i_lock)->mr_lock/1);
[110926.972512]   <Interrupt>
[110926.972513]     lock(sb_internal);
[110926.972514] 
[110926.972514]  *** DEADLOCK ***
[110926.972514] 
[110926.972516] 2 locks held by kswapd0/725:
[110926.972517]  #0:  (shrinker_rwsem){++++..}, at: [<ffffffff810bbd22>] shrink_slab+0x32/0x1f0
[110926.972522]  #1:  (&type->s_umount_key#20){++++.+}, at: [<ffffffff810f5a8e>] grab_super_passive+0x3e/0x90
[110926.972527] 
[110926.972527] the shortest dependencies between 2nd lock and 1st lock:
[110926.972533]  -> (&(&ip->i_lock)->mr_lock/1){+.+.+.} ops: 58117 {
[110926.972536]     HARDIRQ-ON-W at:
[110926.972537]                       [<ffffffff8107f091>] __lock_acquire+0x631/0x1c00
[110926.972540]                       [<ffffffff81080b55>] lock_acquire+0x55/0x70
[110926.972542]                       [<ffffffff8106126a>] down_write_nested+0x4a/0x70
[110926.972545]                       [<ffffffff811e8164>] xfs_ilock+0x84/0xb0
[110926.972548]                       [<ffffffff811f5194>] xfs_create+0x1d4/0x5a0
[110926.972550]                       [<ffffffff811eca1a>] xfs_vn_mknod+0x8a/0x1b0
[110926.972552]                       [<ffffffff811ecb6e>] xfs_vn_create+0xe/0x10
[110926.972554]                       [<ffffffff81100332>] vfs_create+0x72/0xc0
[110926.972556]                       [<ffffffff81100b8e>] do_last.isra.69+0x80e/0xc80
[110926.972558]                       [<ffffffff811010ab>] path_openat.isra.70+0xab/0x490
[110926.972560]                       [<ffffffff8110184d>] do_filp_open+0x3d/0xa0
[110926.972562]                       [<ffffffff810f2139>] do_sys_open+0xf9/0x1e0
[110926.972565]                       [<ffffffff810f223c>] sys_open+0x1c/0x20
[110926.972567]                       [<ffffffff816b2d12>] system_call_fastpath+0x16/0x1b
[110926.972570]     SOFTIRQ-ON-W at:
[110926.972571]                       [<ffffffff8107f0c7>] __lock_acquire+0x667/0x1c00
[110926.972573]                       [<ffffffff81080b55>] lock_acquire+0x55/0x70
[110926.972574]                       [<ffffffff8106126a>] down_write_nested+0x4a/0x70
[110926.972576]                       [<ffffffff811e8164>] xfs_ilock+0x84/0xb0
[110926.972578]                       [<ffffffff811f5194>] xfs_create+0x1d4/0x5a0
[110926.972580]                       [<ffffffff811eca1a>] xfs_vn_mknod+0x8a/0x1b0
[110926.972581]                       [<ffffffff811ecb6e>] xfs_vn_create+0xe/0x10
[110926.972583]                       [<ffffffff81100332>] vfs_create+0x72/0xc0
[110926.972585]                       [<ffffffff81100b8e>] do_last.isra.69+0x80e/0xc80
[110926.972587]                       [<ffffffff811010ab>] path_openat.isra.70+0xab/0x490
[110926.972589]                       [<ffffffff8110184d>] do_filp_open+0x3d/0xa0
[110926.972591]                       [<ffffffff810f2139>] do_sys_open+0xf9/0x1e0
[110926.972593]                       [<ffffffff810f223c>] sys_open+0x1c/0x20
[110926.972595]                       [<ffffffff816b2d12>] system_call_fastpath+0x16/0x1b
[110926.972597]     RECLAIM_FS-ON-W at:
[110926.972598]                          [<ffffffff8108137e>] mark_held_locks+0x7e/0x130
[110926.972600]                          [<ffffffff81081a63>] lockdep_trace_alloc+0x63/0xc0
[110926.972601]                          [<ffffffff810e9dd5>] kmem_cache_alloc+0x35/0xe0
[110926.972603]                          [<ffffffff810dba31>] vm_map_ram+0x271/0x770
[110926.972606]                          [<ffffffff811e1316>] _xfs_buf_map_pages+0x46/0xe0
[110926.972609]                          [<ffffffff811e222a>] xfs_buf_get_map+0x8a/0x130
[110926.972610]                          [<ffffffff81233ab9>] xfs_trans_get_buf_map+0xa9/0xd0
[110926.972613]                          [<ffffffff8121bced>] xfs_ialloc_inode_init+0xcd/0x1d0
[110926.972616]                          [<ffffffff8121c25e>] xfs_ialloc_ag_alloc+0x1be/0x560
[110926.972618]                          [<ffffffff8121da65>] xfs_dialloc+0x185/0x2a0
[110926.972619]                          [<ffffffff8121f198>] xfs_ialloc+0x58/0x650
[110926.972621]                          [<ffffffff811f3985>] xfs_dir_ialloc+0x65/0x270
[110926.972623]                          [<ffffffff811f536c>] xfs_create+0x3ac/0x5a0
[110926.972624]                          [<ffffffff811eca1a>] xfs_vn_mknod+0x8a/0x1b0
[110926.972626]                          [<ffffffff811ecb6e>] xfs_vn_create+0xe/0x10
[110926.972628]                          [<ffffffff81100332>] vfs_create+0x72/0xc0
[110926.972630]                          [<ffffffff81100b8e>] do_last.isra.69+0x80e/0xc80
[110926.972632]                          [<ffffffff811010ab>] path_openat.isra.70+0xab/0x490
[110926.972634]                          [<ffffffff8110184d>] do_filp_open+0x3d/0xa0
[110926.972636]                          [<ffffffff810f2139>] do_sys_open+0xf9/0x1e0
[110926.972638]                          [<ffffffff810f223c>] sys_open+0x1c/0x20
[110926.972640]                          [<ffffffff816b2d12>] system_call_fastpath+0x16/0x1b
[110926.972642]     INITIAL USE at:
[110926.972642]                      [<ffffffff8107ed49>] __lock_acquire+0x2e9/0x1c00
[110926.972644]                      [<ffffffff81080b55>] lock_acquire+0x55/0x70
[110926.972646]                      [<ffffffff8106126a>] down_write_nested+0x4a/0x70
[110926.972648]                      [<ffffffff811e8164>] xfs_ilock+0x84/0xb0
[110926.972650]                      [<ffffffff811f5194>] xfs_create+0x1d4/0x5a0
[110926.972651]                      [<ffffffff811eca1a>] xfs_vn_mknod+0x8a/0x1b0
[110926.972653]                      [<ffffffff811ecb6e>] xfs_vn_create+0xe/0x10
[110926.972655]                      [<ffffffff81100332>] vfs_create+0x72/0xc0
[110926.972657]                      [<ffffffff81100b8e>] do_last.isra.69+0x80e/0xc80
[110926.972659]                      [<ffffffff811010ab>] path_openat.isra.70+0xab/0x490
[110926.972661]                      [<ffffffff8110184d>] do_filp_open+0x3d/0xa0
[110926.972663]                      [<ffffffff810f2139>] do_sys_open+0xf9/0x1e0
[110926.972664]                      [<ffffffff810f223c>] sys_open+0x1c/0x20
[110926.972666]                      [<ffffffff816b2d12>] system_call_fastpath+0x16/0x1b
[110926.972668]   }
[110926.972669]   ... key      at: [<ffffffff825b4b81>] __key.41355+0x1/0x8
[110926.972672]   ... acquired at:
[110926.972672]    [<ffffffff81080b55>] lock_acquire+0x55/0x70
[110926.972674]    [<ffffffff8106126a>] down_write_nested+0x4a/0x70
[110926.972676]    [<ffffffff811e8164>] xfs_ilock+0x84/0xb0
[110926.972678]    [<ffffffff811f5194>] xfs_create+0x1d4/0x5a0
[110926.972679]    [<ffffffff811eca1a>] xfs_vn_mknod+0x8a/0x1b0
[110926.972681]    [<ffffffff811ecb6e>] xfs_vn_create+0xe/0x10
[110926.972683]    [<ffffffff81100332>] vfs_create+0x72/0xc0
[110926.972684]    [<ffffffff81100b8e>] do_last.isra.69+0x80e/0xc80
[110926.972686]    [<ffffffff811010ab>] path_openat.isra.70+0xab/0x490
[110926.972688]    [<ffffffff8110184d>] do_filp_open+0x3d/0xa0
[110926.972690]    [<ffffffff810f2139>] do_sys_open+0xf9/0x1e0
[110926.972692]    [<ffffffff810f223c>] sys_open+0x1c/0x20
[110926.972694]    [<ffffffff816b2d12>] system_call_fastpath+0x16/0x1b
[110926.972696] 
[110926.972696] -> (sb_internal){.+.+.?} ops: 1710064 {
[110926.972699]    HARDIRQ-ON-R at:
[110926.972700]                     [<ffffffff8107ef6a>] __lock_acquire+0x50a/0x1c00
[110926.972702]                     [<ffffffff81080b55>] lock_acquire+0x55/0x70
[110926.972704]                     [<ffffffff810f452b>] __sb_start_write+0xab/0x190
[110926.972705]                     [<ffffffff8122b268>] xfs_trans_alloc+0x28/0x50
[110926.972707]                     [<ffffffff811f5147>] xfs_create+0x187/0x5a0
[110926.972709]                     [<ffffffff811eca1a>] xfs_vn_mknod+0x8a/0x1b0
[110926.972711]                     [<ffffffff811ecb6e>] xfs_vn_create+0xe/0x10
[110926.972712]                     [<ffffffff81100332>] vfs_create+0x72/0xc0
[110926.972714]                     [<ffffffff81100b8e>] do_last.isra.69+0x80e/0xc80
[110926.972716]                     [<ffffffff811010ab>] path_openat.isra.70+0xab/0x490
[110926.972718]                     [<ffffffff8110184d>] do_filp_open+0x3d/0xa0
[110926.972720]                     [<ffffffff810f2139>] do_sys_open+0xf9/0x1e0
[110926.972722]                     [<ffffffff810f223c>] sys_open+0x1c/0x20
[110926.972724]                     [<ffffffff816b2d12>] system_call_fastpath+0x16/0x1b
[110926.972726]    SOFTIRQ-ON-R at:
[110926.972727]                     [<ffffffff8107f0c7>] __lock_acquire+0x667/0x1c00
[110926.972728]                     [<ffffffff81080b55>] lock_acquire+0x55/0x70
[110926.972730]                     [<ffffffff810f452b>] __sb_start_write+0xab/0x190
[110926.972732]                     [<ffffffff8122b268>] xfs_trans_alloc+0x28/0x50
[110926.972734]                     [<ffffffff811f5147>] xfs_create+0x187/0x5a0
[110926.972735]                     [<ffffffff811eca1a>] xfs_vn_mknod+0x8a/0x1b0
[110926.972737]                     [<ffffffff811ecb6e>] xfs_vn_create+0xe/0x10
[110926.972739]                     [<ffffffff81100332>] vfs_create+0x72/0xc0
[110926.972741]                     [<ffffffff81100b8e>] do_last.isra.69+0x80e/0xc80
[110926.972743]                     [<ffffffff811010ab>] path_openat.isra.70+0xab/0x490
[110926.972745]                     [<ffffffff8110184d>] do_filp_open+0x3d/0xa0
[110926.972747]                     [<ffffffff810f2139>] do_sys_open+0xf9/0x1e0
[110926.972749]                     [<ffffffff810f223c>] sys_open+0x1c/0x20
[110926.972750]                     [<ffffffff816b2d12>] system_call_fastpath+0x16/0x1b
[110926.972752]    IN-RECLAIM_FS-R at:
[110926.972753]                        [<ffffffff8107efdf>] __lock_acquire+0x57f/0x1c00
[110926.972755]                        [<ffffffff81080b55>] lock_acquire+0x55/0x70
[110926.972757]                        [<ffffffff810f452b>] __sb_start_write+0xab/0x190
[110926.972758]                        [<ffffffff8122b268>] xfs_trans_alloc+0x28/0x50
[110926.972760]                        [<ffffffff811f3e54>] xfs_free_eofblocks+0x104/0x250
[110926.972762]                        [<ffffffff811f4b29>] xfs_inactive+0xa9/0x480
[110926.972763]                        [<ffffffff811efe00>] xfs_fs_evict_inode+0x70/0x80
[110926.972765]                        [<ffffffff8110cb8f>] evict+0xaf/0x1b0
[110926.972768]                        [<ffffffff8110d219>] dispose_list+0x39/0x50
[110926.972770]                        [<ffffffff8110dc73>] prune_icache_sb+0x183/0x340
[110926.972772]                        [<ffffffff810f5bd3>] prune_super+0xf3/0x1a0
[110926.972773]                        [<ffffffff810bbe0e>] shrink_slab+0x11e/0x1f0
[110926.972775]                        [<ffffffff810be400>] kswapd+0x690/0xa10
[110926.972777]                        [<ffffffff8105c246>] kthread+0xd6/0xe0
[110926.972779]                        [<ffffffff816b2c6c>] ret_from_fork+0x7c/0xb0
[110926.972781]    RECLAIM_FS-ON-R at:
[110926.972782]                        [<ffffffff8108137e>] mark_held_locks+0x7e/0x130
[110926.972784]                        [<ffffffff81081a63>] lockdep_trace_alloc+0x63/0xc0
[110926.972785]                        [<ffffffff810e9dd5>] kmem_cache_alloc+0x35/0xe0
[110926.972787]                        [<ffffffff811f6d3f>] kmem_zone_alloc+0x5f/0xe0
[110926.972789]                        [<ffffffff811f6dd8>] kmem_zone_zalloc+0x18/0x50
[110926.972790]                        [<ffffffff8122b1e2>] _xfs_trans_alloc+0x32/0x90
[110926.972792]                        [<ffffffff8122b278>] xfs_trans_alloc+0x38/0x50
[110926.972794]                        [<ffffffff811f5147>] xfs_create+0x187/0x5a0
[110926.972796]                        [<ffffffff811eca1a>] xfs_vn_mknod+0x8a/0x1b0
[110926.972797]                        [<ffffffff811ecb6e>] xfs_vn_create+0xe/0x10
[110926.972799]                        [<ffffffff81100332>] vfs_create+0x72/0xc0
[110926.972801]                        [<ffffffff81100b8e>] do_last.isra.69+0x80e/0xc80
[110926.972803]                        [<ffffffff811010ab>] path_openat.isra.70+0xab/0x490
[110926.972805]                        [<ffffffff8110184d>] do_filp_open+0x3d/0xa0
[110926.972807]                        [<ffffffff810f2139>] do_sys_open+0xf9/0x1e0
[110926.972809]                        [<ffffffff810f223c>] sys_open+0x1c/0x20
[110926.972811]                        [<ffffffff816b2d12>] system_call_fastpath+0x16/0x1b
[110926.972813]    INITIAL USE at:
[110926.972814]                    [<ffffffff8107ed49>] __lock_acquire+0x2e9/0x1c00
[110926.972815]                    [<ffffffff81080b55>] lock_acquire+0x55/0x70
[110926.972817]                    [<ffffffff810f452b>] __sb_start_write+0xab/0x190
[110926.972819]                    [<ffffffff8122b268>] xfs_trans_alloc+0x28/0x50
[110926.972820]                    [<ffffffff811f5147>] xfs_create+0x187/0x5a0
[110926.972822]                    [<ffffffff811eca1a>] xfs_vn_mknod+0x8a/0x1b0
[110926.972824]                    [<ffffffff811ecb6e>] xfs_vn_create+0xe/0x10
[110926.972826]                    [<ffffffff81100332>] vfs_create+0x72/0xc0
[110926.972827]                    [<ffffffff81100b8e>] do_last.isra.69+0x80e/0xc80
[110926.972829]                    [<ffffffff811010ab>] path_openat.isra.70+0xab/0x490
[110926.972831]                    [<ffffffff8110184d>] do_filp_open+0x3d/0xa0
[110926.972833]                    [<ffffffff810f2139>] do_sys_open+0xf9/0x1e0
[110926.972835]                    [<ffffffff810f223c>] sys_open+0x1c/0x20
[110926.972837]                    [<ffffffff816b2d12>] system_call_fastpath+0x16/0x1b
[110926.972839]  }
[110926.972840]  ... key      at: [<ffffffff81c34e40>] xfs_fs_type+0x60/0x80
[110926.972842]  ... acquired at:
[110926.972843]    [<ffffffff8107df3b>] check_usage_forwards+0x10b/0x140
[110926.972845]    [<ffffffff8107e900>] mark_lock+0x190/0x2f0
[110926.972846]    [<ffffffff8107efdf>] __lock_acquire+0x57f/0x1c00
[110926.972848]    [<ffffffff81080b55>] lock_acquire+0x55/0x70
[110926.972850]    [<ffffffff810f452b>] __sb_start_write+0xab/0x190
[110926.972851]    [<ffffffff8122b268>] xfs_trans_alloc+0x28/0x50
[110926.972853]    [<ffffffff811f3e54>] xfs_free_eofblocks+0x104/0x250
[110926.972855]    [<ffffffff811f4b29>] xfs_inactive+0xa9/0x480
[110926.972856]    [<ffffffff811efe00>] xfs_fs_evict_inode+0x70/0x80
[110926.972858]    [<ffffffff8110cb8f>] evict+0xaf/0x1b0
[110926.972860]    [<ffffffff8110d219>] dispose_list+0x39/0x50
[110926.972861]    [<ffffffff8110dc73>] prune_icache_sb+0x183/0x340
[110926.972863]    [<ffffffff810f5bd3>] prune_super+0xf3/0x1a0
[110926.972865]    [<ffffffff810bbe0e>] shrink_slab+0x11e/0x1f0
[110926.972866]    [<ffffffff810be400>] kswapd+0x690/0xa10
[110926.972868]    [<ffffffff8105c246>] kthread+0xd6/0xe0
[110926.972870]    [<ffffffff816b2c6c>] ret_from_fork+0x7c/0xb0
[110926.972871] 
[110926.972872] 
[110926.972872] stack backtrace:
[110926.972874] Pid: 725, comm: kswapd0 Not tainted 3.7.0-rc4 #1
[110926.972875] Call Trace:
[110926.972878]  [<ffffffff8107de28>] print_irq_inversion_bug.part.37+0x1e8/0x1f0
[110926.972880]  [<ffffffff8107df3b>] check_usage_forwards+0x10b/0x140
[110926.972883]  [<ffffffff8107e900>] mark_lock+0x190/0x2f0
[110926.972885]  [<ffffffff8107de30>] ? print_irq_inversion_bug.part.37+0x1f0/0x1f0
[110926.972887]  [<ffffffff8107efdf>] __lock_acquire+0x57f/0x1c00
[110926.972889]  [<ffffffff81220424>] ? xfs_iext_bno_to_ext+0x84/0x160
[110926.972892]  [<ffffffff8120a0e3>] ? xfs_bmbt_get_all+0x13/0x20
[110926.972895]  [<ffffffff81201104>] ? xfs_bmap_search_multi_extents+0xa4/0x110
[110926.972897]  [<ffffffff81080b55>] lock_acquire+0x55/0x70
[110926.972899]  [<ffffffff8122b268>] ? xfs_trans_alloc+0x28/0x50
[110926.972901]  [<ffffffff810f452b>] __sb_start_write+0xab/0x190
[110926.972903]  [<ffffffff8122b268>] ? xfs_trans_alloc+0x28/0x50
[110926.972905]  [<ffffffff8122b268>] ? xfs_trans_alloc+0x28/0x50
[110926.972907]  [<ffffffff8122b268>] xfs_trans_alloc+0x28/0x50
[110926.972908]  [<ffffffff811f3e54>] xfs_free_eofblocks+0x104/0x250
[110926.972910]  [<ffffffff816b1efb>] ? _raw_spin_unlock_irq+0x2b/0x50
[110926.972912]  [<ffffffff811f4b29>] xfs_inactive+0xa9/0x480
[110926.972914]  [<ffffffff816b1efb>] ? _raw_spin_unlock_irq+0x2b/0x50
[110926.972916]  [<ffffffff811efe00>] xfs_fs_evict_inode+0x70/0x80
[110926.972918]  [<ffffffff8110cb8f>] evict+0xaf/0x1b0
[110926.972920]  [<ffffffff8110d219>] dispose_list+0x39/0x50
[110926.972922]  [<ffffffff8110dc73>] prune_icache_sb+0x183/0x340
[110926.972924]  [<ffffffff810f5bd3>] prune_super+0xf3/0x1a0
[110926.972926]  [<ffffffff810bbe0e>] shrink_slab+0x11e/0x1f0
[110926.972928]  [<ffffffff810be400>] kswapd+0x690/0xa10
[110926.972930]  [<ffffffff8105ca30>] ? __init_waitqueue_head+0x60/0x60
[110926.972932]  [<ffffffff810bdd70>] ? shrink_lruvec+0x540/0x540
[110926.972934]  [<ffffffff8105c246>] kthread+0xd6/0xe0
[110926.972936]  [<ffffffff816b1efb>] ? _raw_spin_unlock_irq+0x2b/0x50
[110926.972938]  [<ffffffff8105c170>] ? flush_kthread_worker+0xe0/0xe0
[110926.972940]  [<ffffffff816b2c6c>] ret_from_fork+0x7c/0xb0
[110926.972942]  [<ffffffff8105c170>] ? flush_kthread_worker+0xe0/0xe0

[66571.610863] 
[66571.610869] =========================================================
[66571.610870] [ INFO: possible irq lock inversion dependency detected ]
[66571.610873] 3.7.0-rc5 #1 Not tainted
[66571.610874] ---------------------------------------------------------
[66571.610875] cc1/21330 just changed the state of lock:
[66571.610877]  (sb_internal){.+.+.?}, at: [<ffffffff8122b138>] xfs_trans_alloc+0x28/0x50
[66571.610885] but this lock took another, RECLAIM_FS-unsafe lock in the past:
[66571.610886]  (&(&ip->i_lock)->mr_lock/1){+.+.+.}
[66571.610886] 
[66571.610886] and interrupts could create inverse lock ordering between them.
[66571.610886] 
[66571.610890] 
[66571.610890] other info that might help us debug this:
[66571.610891]  Possible interrupt unsafe locking scenario:
[66571.610891] 
[66571.610892]        CPU0                    CPU1
[66571.610893]        ----                    ----
[66571.610894]   lock(&(&ip->i_lock)->mr_lock/1);
[66571.610896]                                local_irq_disable();
[66571.610897]                                lock(sb_internal);
[66571.610898]                                lock(&(&ip->i_lock)->mr_lock/1);
[66571.610900]   <Interrupt>
[66571.610901]     lock(sb_internal);
[66571.610902] 
[66571.610902]  *** DEADLOCK ***
[66571.610902] 
[66571.610904] 3 locks held by cc1/21330:
[66571.610905]  #0:  (&mm->mmap_sem){++++++}, at: [<ffffffff81029d8b>] __do_page_fault+0xfb/0x480
[66571.610910]  #1:  (shrinker_rwsem){++++..}, at: [<ffffffff810bbd02>] shrink_slab+0x32/0x1f0
[66571.610915]  #2:  (&type->s_umount_key#20){++++.+}, at: [<ffffffff810f5a7e>] grab_super_passive+0x3e/0x90
[66571.610921] 
[66571.610921] the shortest dependencies between 2nd lock and 1st lock:
[66571.610927]  -> (&(&ip->i_lock)->mr_lock/1){+.+.+.} ops: 169649 {
[66571.610931]     HARDIRQ-ON-W at:
[66571.610932]                       [<ffffffff8107f091>] __lock_acquire+0x631/0x1c00
[66571.610935]                       [<ffffffff81080b55>] lock_acquire+0x55/0x70
[66571.610937]                       [<ffffffff8106126a>] down_write_nested+0x4a/0x70
[66571.610941]                       [<ffffffff811e8164>] xfs_ilock+0x84/0xb0
[66571.610944]                       [<ffffffff811f51a4>] xfs_create+0x1d4/0x5a0
[66571.610946]                       [<ffffffff811eca2a>] xfs_vn_mknod+0x8a/0x1b0
[66571.610948]                       [<ffffffff811ecb7e>] xfs_vn_create+0xe/0x10
[66571.610950]                       [<ffffffff81100322>] vfs_create+0x72/0xc0
[66571.610953]                       [<ffffffff81100b7e>] do_last.isra.69+0x80e/0xc80
[66571.610955]                       [<ffffffff8110109b>] path_openat.isra.70+0xab/0x490
[66571.610957]                       [<ffffffff8110183d>] do_filp_open+0x3d/0xa0
[66571.610959]                       [<ffffffff810f2129>] do_sys_open+0xf9/0x1e0
[66571.610962]                       [<ffffffff810f222c>] sys_open+0x1c/0x20
[66571.610964]                       [<ffffffff816b2e52>] system_call_fastpath+0x16/0x1b
[66571.610967]     SOFTIRQ-ON-W at:
[66571.610968]                       [<ffffffff8107f0c7>] __lock_acquire+0x667/0x1c00
[66571.610970]                       [<ffffffff81080b55>] lock_acquire+0x55/0x70
[66571.610972]                       [<ffffffff8106126a>] down_write_nested+0x4a/0x70
[66571.610974]                       [<ffffffff811e8164>] xfs_ilock+0x84/0xb0
[66571.610976]                       [<ffffffff811f51a4>] xfs_create+0x1d4/0x5a0
[66571.610977]                       [<ffffffff811eca2a>] xfs_vn_mknod+0x8a/0x1b0
[66571.610979]                       [<ffffffff811ecb7e>] xfs_vn_create+0xe/0x10
[66571.610981]                       [<ffffffff81100322>] vfs_create+0x72/0xc0
[66571.610983]                       [<ffffffff81100b7e>] do_last.isra.69+0x80e/0xc80
[66571.610985]                       [<ffffffff8110109b>] path_openat.isra.70+0xab/0x490
[66571.610987]                       [<ffffffff8110183d>] do_filp_open+0x3d/0xa0
[66571.610989]                       [<ffffffff810f2129>] do_sys_open+0xf9/0x1e0
[66571.610991]                       [<ffffffff810f222c>] sys_open+0x1c/0x20
[66571.610993]                       [<ffffffff816b2e52>] system_call_fastpath+0x16/0x1b
[66571.610995]     RECLAIM_FS-ON-W at:
[66571.610996]                          [<ffffffff8108137e>] mark_held_locks+0x7e/0x130
[66571.610998]                          [<ffffffff81081a63>] lockdep_trace_alloc+0x63/0xc0
[66571.610999]                          [<ffffffff810e9dc5>] kmem_cache_alloc+0x35/0xe0
[66571.611002]                          [<ffffffff810dba21>] vm_map_ram+0x271/0x770
[66571.611004]                          [<ffffffff811e12b6>] _xfs_buf_map_pages+0x46/0xe0
[66571.611008]                          [<ffffffff811e21ca>] xfs_buf_get_map+0x8a/0x130
[66571.611009]                          [<ffffffff81233989>] xfs_trans_get_buf_map+0xa9/0xd0
[66571.611011]                          [<ffffffff8121bc2d>] xfs_ialloc_inode_init+0xcd/0x1d0
[66571.611015]                          [<ffffffff8121c16f>] xfs_ialloc_ag_alloc+0x18f/0x500
[66571.611017]                          [<ffffffff8121d955>] xfs_dialloc+0x185/0x2a0
[66571.611019]                          [<ffffffff8121f068>] xfs_ialloc+0x58/0x650
[66571.611021]                          [<ffffffff811f3995>] xfs_dir_ialloc+0x65/0x270
[66571.611023]                          [<ffffffff811f537c>] xfs_create+0x3ac/0x5a0
[66571.611024]                          [<ffffffff811eca2a>] xfs_vn_mknod+0x8a/0x1b0
[66571.611026]                          [<ffffffff811ecb61>] xfs_vn_mkdir+0x11/0x20
[66571.611028]                          [<ffffffff8110016f>] vfs_mkdir+0x7f/0xd0
[66571.611030]                          [<ffffffff81101b83>] sys_mkdirat+0x43/0x80
[66571.611032]                          [<ffffffff81101bd4>] sys_mkdir+0x14/0x20
[66571.611034]                          [<ffffffff816b2e52>] system_call_fastpath+0x16/0x1b
[66571.611036]     INITIAL USE at:
[66571.611037]                      [<ffffffff8107ed49>] __lock_acquire+0x2e9/0x1c00
[66571.611038]                      [<ffffffff81080b55>] lock_acquire+0x55/0x70
[66571.611040]                      [<ffffffff8106126a>] down_write_nested+0x4a/0x70
[66571.611042]                      [<ffffffff811e8164>] xfs_ilock+0x84/0xb0
[66571.611044]                      [<ffffffff811f51a4>] xfs_create+0x1d4/0x5a0
[66571.611046]                      [<ffffffff811eca2a>] xfs_vn_mknod+0x8a/0x1b0
[66571.611047]                      [<ffffffff811ecb7e>] xfs_vn_create+0xe/0x10
[66571.611049]                      [<ffffffff81100322>] vfs_create+0x72/0xc0
[66571.611051]                      [<ffffffff81100b7e>] do_last.isra.69+0x80e/0xc80
[66571.611053]                      [<ffffffff8110109b>] path_openat.isra.70+0xab/0x490
[66571.611055]                      [<ffffffff8110183d>] do_filp_open+0x3d/0xa0
[66571.611057]                      [<ffffffff810f2129>] do_sys_open+0xf9/0x1e0
[66571.611059]                      [<ffffffff810f222c>] sys_open+0x1c/0x20
[66571.611061]                      [<ffffffff816b2e52>] system_call_fastpath+0x16/0x1b
[66571.611063]   }
[66571.611064]   ... key      at: [<ffffffff825b4b81>] __key.41357+0x1/0x8
[66571.611066]   ... acquired at:
[66571.611067]    [<ffffffff81080b55>] lock_acquire+0x55/0x70
[66571.611069]    [<ffffffff8106126a>] down_write_nested+0x4a/0x70
[66571.611071]    [<ffffffff811e8164>] xfs_ilock+0x84/0xb0
[66571.611073]    [<ffffffff811f51a4>] xfs_create+0x1d4/0x5a0
[66571.611074]    [<ffffffff811eca2a>] xfs_vn_mknod+0x8a/0x1b0
[66571.611076]    [<ffffffff811ecb7e>] xfs_vn_create+0xe/0x10
[66571.611078]    [<ffffffff81100322>] vfs_create+0x72/0xc0
[66571.611080]    [<ffffffff81100b7e>] do_last.isra.69+0x80e/0xc80
[66571.611082]    [<ffffffff8110109b>] path_openat.isra.70+0xab/0x490
[66571.611084]    [<ffffffff8110183d>] do_filp_open+0x3d/0xa0
[66571.611086]    [<ffffffff810f2129>] do_sys_open+0xf9/0x1e0
[66571.611088]    [<ffffffff810f222c>] sys_open+0x1c/0x20
[66571.611090]    [<ffffffff816b2e52>] system_call_fastpath+0x16/0x1b
[66571.611091] 
[66571.611092] -> (sb_internal){.+.+.?} ops: 1341531 {
[66571.611095]    HARDIRQ-ON-R at:
[66571.611096]                     [<ffffffff8107ef6a>] __lock_acquire+0x50a/0x1c00
[66571.611098]                     [<ffffffff81080b55>] lock_acquire+0x55/0x70
[66571.611100]                     [<ffffffff810f451b>] __sb_start_write+0xab/0x190
[66571.611102]                     [<ffffffff8122b138>] xfs_trans_alloc+0x28/0x50
[66571.611104]                     [<ffffffff811f5157>] xfs_create+0x187/0x5a0
[66571.611105]                     [<ffffffff811eca2a>] xfs_vn_mknod+0x8a/0x1b0
[66571.611107]                     [<ffffffff811ecb7e>] xfs_vn_create+0xe/0x10
[66571.611109]                     [<ffffffff81100322>] vfs_create+0x72/0xc0
[66571.611111]                     [<ffffffff81100b7e>] do_last.isra.69+0x80e/0xc80
[66571.611113]                     [<ffffffff8110109b>] path_openat.isra.70+0xab/0x490
[66571.611115]                     [<ffffffff8110183d>] do_filp_open+0x3d/0xa0
[66571.611117]                     [<ffffffff810f2129>] do_sys_open+0xf9/0x1e0
[66571.611119]                     [<ffffffff810f222c>] sys_open+0x1c/0x20
[66571.611121]                     [<ffffffff816b2e52>] system_call_fastpath+0x16/0x1b
[66571.611123]    SOFTIRQ-ON-R at:
[66571.611124]                     [<ffffffff8107f0c7>] __lock_acquire+0x667/0x1c00
[66571.611126]                     [<ffffffff81080b55>] lock_acquire+0x55/0x70
[66571.611128]                     [<ffffffff810f451b>] __sb_start_write+0xab/0x190
[66571.611130]                     [<ffffffff8122b138>] xfs_trans_alloc+0x28/0x50
[66571.611132]                     [<ffffffff811f5157>] xfs_create+0x187/0x5a0
[66571.611133]                     [<ffffffff811eca2a>] xfs_vn_mknod+0x8a/0x1b0
[66571.611135]                     [<ffffffff811ecb7e>] xfs_vn_create+0xe/0x10
[66571.611137]                     [<ffffffff81100322>] vfs_create+0x72/0xc0
[66571.611139]                     [<ffffffff81100b7e>] do_last.isra.69+0x80e/0xc80
[66571.611141]                     [<ffffffff8110109b>] path_openat.isra.70+0xab/0x490
[66571.611143]                     [<ffffffff8110183d>] do_filp_open+0x3d/0xa0
[66571.611145]                     [<ffffffff810f2129>] do_sys_open+0xf9/0x1e0
[66571.611147]                     [<ffffffff810f222c>] sys_open+0x1c/0x20
[66571.611149]                     [<ffffffff816b2e52>] system_call_fastpath+0x16/0x1b
[66571.611151]    IN-RECLAIM_FS-R at:
[66571.611152]                        [<ffffffff8107efdf>] __lock_acquire+0x57f/0x1c00
[66571.611154]                        [<ffffffff81080b55>] lock_acquire+0x55/0x70
[66571.611156]                        [<ffffffff810f451b>] __sb_start_write+0xab/0x190
[66571.611158]                        [<ffffffff8122b138>] xfs_trans_alloc+0x28/0x50
[66571.611159]                        [<ffffffff811f3e64>] xfs_free_eofblocks+0x104/0x250
[66571.611161]                        [<ffffffff811f4b39>] xfs_inactive+0xa9/0x480
[66571.611163]                        [<ffffffff811efe10>] xfs_fs_evict_inode+0x70/0x80
[66571.611165]                        [<ffffffff8110cb7f>] evict+0xaf/0x1b0
[66571.611168]                        [<ffffffff8110d209>] dispose_list+0x39/0x50
[66571.611170]                        [<ffffffff8110dc63>] prune_icache_sb+0x183/0x340
[66571.611172]                        [<ffffffff810f5bc3>] prune_super+0xf3/0x1a0
[66571.611173]                        [<ffffffff810bbdee>] shrink_slab+0x11e/0x1f0
[66571.611175]                        [<ffffffff810be98f>] try_to_free_pages+0x21f/0x4e0
[66571.611177]                        [<ffffffff810b5ec6>] __alloc_pages_nodemask+0x506/0x800
[66571.611179]                        [<ffffffff810ce56e>] handle_pte_fault+0x5ae/0x7a0
[66571.611182]                        [<ffffffff810cf769>] handle_mm_fault+0x1f9/0x2a0
[66571.611184]                        [<ffffffff81029dfc>] __do_page_fault+0x16c/0x480
[66571.611186]                        [<ffffffff8102a139>] do_page_fault+0x9/0x10
[66571.611188]                        [<ffffffff816b287f>] page_fault+0x1f/0x30
[66571.611190]    RECLAIM_FS-ON-R at:
[66571.611191]                        [<ffffffff8108137e>] mark_held_locks+0x7e/0x130
[66571.611193]                        [<ffffffff81081a63>] lockdep_trace_alloc+0x63/0xc0
[66571.611195]                        [<ffffffff810e9dc5>] kmem_cache_alloc+0x35/0xe0
[66571.611197]                        [<ffffffff811f6d4f>] kmem_zone_alloc+0x5f/0xe0
[66571.611198]                        [<ffffffff811f6de8>] kmem_zone_zalloc+0x18/0x50
[66571.611200]                        [<ffffffff8122b0b2>] _xfs_trans_alloc+0x32/0x90
[66571.611202]                        [<ffffffff8122b148>] xfs_trans_alloc+0x38/0x50
[66571.611204]                        [<ffffffff811f5157>] xfs_create+0x187/0x5a0
[66571.611205]                        [<ffffffff811eca2a>] xfs_vn_mknod+0x8a/0x1b0
[66571.611207]                        [<ffffffff811ecb7e>] xfs_vn_create+0xe/0x10
[66571.611209]                        [<ffffffff81100322>] vfs_create+0x72/0xc0
[66571.611211]                        [<ffffffff81100b7e>] do_last.isra.69+0x80e/0xc80
[66571.611213]                        [<ffffffff8110109b>] path_openat.isra.70+0xab/0x490
[66571.611215]                        [<ffffffff8110183d>] do_filp_open+0x3d/0xa0
[66571.611217]                        [<ffffffff810f2129>] do_sys_open+0xf9/0x1e0
[66571.611219]                        [<ffffffff810f222c>] sys_open+0x1c/0x20
[66571.611221]                        [<ffffffff816b2e52>] system_call_fastpath+0x16/0x1b
[66571.611223]    INITIAL USE at:
[66571.611224]                    [<ffffffff8107ed49>] __lock_acquire+0x2e9/0x1c00
[66571.611225]                    [<ffffffff81080b55>] lock_acquire+0x55/0x70
[66571.611227]                    [<ffffffff810f451b>] __sb_start_write+0xab/0x190
[66571.611229]                    [<ffffffff8122b138>] xfs_trans_alloc+0x28/0x50
[66571.611231]                    [<ffffffff811f5157>] xfs_create+0x187/0x5a0
[66571.611232]                    [<ffffffff811eca2a>] xfs_vn_mknod+0x8a/0x1b0
[66571.611234]                    [<ffffffff811ecb7e>] xfs_vn_create+0xe/0x10
[66571.611236]                    [<ffffffff81100322>] vfs_create+0x72/0xc0
[66571.611238]                    [<ffffffff81100b7e>] do_last.isra.69+0x80e/0xc80
[66571.611240]                    [<ffffffff8110109b>] path_openat.isra.70+0xab/0x490
[66571.611242]                    [<ffffffff8110183d>] do_filp_open+0x3d/0xa0
[66571.611244]                    [<ffffffff810f2129>] do_sys_open+0xf9/0x1e0
[66571.611246]                    [<ffffffff810f222c>] sys_open+0x1c/0x20
[66571.611248]                    [<ffffffff816b2e52>] system_call_fastpath+0x16/0x1b
[66571.611250]  }
[66571.611251]  ... key      at: [<ffffffff81c34e40>] xfs_fs_type+0x60/0x80
[66571.611254]  ... acquired at:
[66571.611254]    [<ffffffff8107df3b>] check_usage_forwards+0x10b/0x140
[66571.611256]    [<ffffffff8107e900>] mark_lock+0x190/0x2f0
[66571.611258]    [<ffffffff8107efdf>] __lock_acquire+0x57f/0x1c00
[66571.611260]    [<ffffffff81080b55>] lock_acquire+0x55/0x70
[66571.611261]    [<ffffffff810f451b>] __sb_start_write+0xab/0x190
[66571.611263]    [<ffffffff8122b138>] xfs_trans_alloc+0x28/0x50
[66571.611265]    [<ffffffff811f3e64>] xfs_free_eofblocks+0x104/0x250
[66571.611266]    [<ffffffff811f4b39>] xfs_inactive+0xa9/0x480
[66571.611268]    [<ffffffff811efe10>] xfs_fs_evict_inode+0x70/0x80
[66571.611270]    [<ffffffff8110cb7f>] evict+0xaf/0x1b0
[66571.611271]    [<ffffffff8110d209>] dispose_list+0x39/0x50
[66571.611273]    [<ffffffff8110dc63>] prune_icache_sb+0x183/0x340
[66571.611275]    [<ffffffff810f5bc3>] prune_super+0xf3/0x1a0
[66571.611277]    [<ffffffff810bbdee>] shrink_slab+0x11e/0x1f0
[66571.611278]    [<ffffffff810be98f>] try_to_free_pages+0x21f/0x4e0
[66571.611280]    [<ffffffff810b5ec6>] __alloc_pages_nodemask+0x506/0x800
[66571.611282]    [<ffffffff810ce56e>] handle_pte_fault+0x5ae/0x7a0
[66571.611284]    [<ffffffff810cf769>] handle_mm_fault+0x1f9/0x2a0
[66571.611286]    [<ffffffff81029dfc>] __do_page_fault+0x16c/0x480
[66571.611287]    [<ffffffff8102a139>] do_page_fault+0x9/0x10
[66571.611289]    [<ffffffff816b287f>] page_fault+0x1f/0x30
[66571.611291] 
[66571.611292] 
[66571.611292] stack backtrace:
[66571.611294] Pid: 21330, comm: cc1 Not tainted 3.7.0-rc5 #1
[66571.611295] Call Trace:
[66571.611298]  [<ffffffff8107de28>] print_irq_inversion_bug.part.37+0x1e8/0x1f0
[66571.611300]  [<ffffffff8107df3b>] check_usage_forwards+0x10b/0x140
[66571.611303]  [<ffffffff8107e900>] mark_lock+0x190/0x2f0
[66571.611306]  [<ffffffff8150406e>] ? dm_request+0x2e/0x2a0
[66571.611308]  [<ffffffff8107de30>] ? print_irq_inversion_bug.part.37+0x1f0/0x1f0
[66571.611310]  [<ffffffff8107efdf>] __lock_acquire+0x57f/0x1c00
[66571.611313]  [<ffffffff812202f4>] ? xfs_iext_bno_to_ext+0x84/0x160
[66571.611316]  [<ffffffff8120a023>] ? xfs_bmbt_get_all+0x13/0x20
[66571.611318]  [<ffffffff81200fb4>] ? xfs_bmap_search_multi_extents+0xa4/0x110
[66571.611320]  [<ffffffff81080b55>] lock_acquire+0x55/0x70
[66571.611322]  [<ffffffff8122b138>] ? xfs_trans_alloc+0x28/0x50
[66571.611324]  [<ffffffff810f451b>] __sb_start_write+0xab/0x190
[66571.611326]  [<ffffffff8122b138>] ? xfs_trans_alloc+0x28/0x50
[66571.611328]  [<ffffffff8122b138>] ? xfs_trans_alloc+0x28/0x50
[66571.611330]  [<ffffffff8122b138>] xfs_trans_alloc+0x28/0x50
[66571.611332]  [<ffffffff811f3e64>] xfs_free_eofblocks+0x104/0x250
[66571.611334]  [<ffffffff816b204b>] ? _raw_spin_unlock_irq+0x2b/0x50
[66571.611336]  [<ffffffff811f4b39>] xfs_inactive+0xa9/0x480
[66571.611337]  [<ffffffff816b204b>] ? _raw_spin_unlock_irq+0x2b/0x50
[66571.611340]  [<ffffffff811efe10>] xfs_fs_evict_inode+0x70/0x80
[66571.611342]  [<ffffffff8110cb7f>] evict+0xaf/0x1b0
[66571.611344]  [<ffffffff8110d209>] dispose_list+0x39/0x50
[66571.611346]  [<ffffffff8110dc63>] prune_icache_sb+0x183/0x340
[66571.611347]  [<ffffffff810f5bc3>] prune_super+0xf3/0x1a0
[66571.611349]  [<ffffffff810bbdee>] shrink_slab+0x11e/0x1f0
[66571.611352]  [<ffffffff810be98f>] try_to_free_pages+0x21f/0x4e0
[66571.611354]  [<ffffffff810b5ec6>] __alloc_pages_nodemask+0x506/0x800
[66571.611356]  [<ffffffff810b9e40>] ? lru_deactivate_fn+0x1c0/0x1c0
[66571.611358]  [<ffffffff810ce56e>] handle_pte_fault+0x5ae/0x7a0
[66571.611360]  [<ffffffff810cf769>] handle_mm_fault+0x1f9/0x2a0
[66571.611363]  [<ffffffff81029dfc>] __do_page_fault+0x16c/0x480
[66571.611366]  [<ffffffff8129c7ad>] ? trace_hardirqs_off_thunk+0x3a/0x3c
[66571.611368]  [<ffffffff8102a139>] do_page_fault+0x9/0x10
[66571.611370]  [<ffffffff816b287f>] page_fault+0x1f/0x30

[-- Attachment #3: Type: text/plain, Size: 121 bytes --]

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: Hang in XFS reclaim on 3.7.0-rc3
  2012-11-19  6:50             ` Torsten Kaiser
@ 2012-11-19 23:53               ` Dave Chinner
  -1 siblings, 0 replies; 31+ messages in thread
From: Dave Chinner @ 2012-11-19 23:53 UTC (permalink / raw)
  To: Torsten Kaiser; +Cc: xfs, Linux Kernel

On Mon, Nov 19, 2012 at 07:50:06AM +0100, Torsten Kaiser wrote:
> On Mon, Nov 19, 2012 at 12:51 AM, Dave Chinner <david@fromorbit.com> wrote:
> > On Sun, Nov 18, 2012 at 04:29:22PM +0100, Torsten Kaiser wrote:
> >> On Sun, Nov 18, 2012 at 11:24 AM, Torsten Kaiser
> >> <just.for.lkml@googlemail.com> wrote:
> >> > On Tue, Oct 30, 2012 at 9:37 PM, Torsten Kaiser
> >> > <just.for.lkml@googlemail.com> wrote:
> >> >> I will keep LOCKDEP enabled on that system, and if there really is
> >> >> another splat, I will report back here. But I rather doubt that this
> >> >> will be needed.
> >> >
> >> > After the patch, I did not see this problem again, but today I found
> >> > another LOCKDEP report that also looks XFS related.
> >> > I found it twice in the logs, and as both were slightly different, I
> >> > will attach both versions.
> >>
> >> > Nov  6 21:57:09 thoregon kernel: [ 9941.104353] 3.7.0-rc4 #1 Not tainted
> >> > Nov  6 21:57:09 thoregon kernel: [ 9941.104355] inconsistent
> >> > {RECLAIM_FS-ON-W} -> {IN-RECLAIM_FS-W} usage.
> >> > Nov  6 21:57:09 thoregon kernel: [ 9941.104430]        CPU0
> >> > Nov  6 21:57:09 thoregon kernel: [ 9941.104431]        ----
> >> > Nov  6 21:57:09 thoregon kernel: [ 9941.104432]   lock(&(&ip->i_lock)->mr_lock);
> >> > Nov  6 21:57:09 thoregon kernel: [ 9941.104433]   <Interrupt>
> >> > Nov  6 21:57:09 thoregon kernel: [ 9941.104434]
> >> > lock(&(&ip->i_lock)->mr_lock);
> >> > Nov  6 21:57:09 thoregon kernel: [ 9941.104435]
> >> > Nov  6 21:57:09 thoregon kernel: [ 9941.104435]  *** DEADLOCK ***
> >>
> >> Sorry! Copied the wrong report. Your fix only landed in -rc5, so my
> >> vanilla -rc4 did (also) report the old problem again.
> >> And I copy&pasted that report instead of the second appearance of the
> >> new problem.
> >
> > Can you repost it with line wrapping turned off? The output simply
> > becomes unreadable when it wraps....
> >
> > Yeah, I know I can put it back together, but I've got better things
> > to do with my time than stitch a couple of hundred lines of debug
> > back into a readable format....
> 
> Sorry about that, but I can't find any option to turn that off in Gmail.

Seems like you can't, as per Documentation/email-clients.txt

> I have added the reports as attachments, I hope that's OK for you.

Encoded as text, so that's fine.

So, both lockdep thingies are the same:

> [110926.972482] =========================================================
> [110926.972484] [ INFO: possible irq lock inversion dependency detected ]
> [110926.972486] 3.7.0-rc4 #1 Not tainted
> [110926.972487] ---------------------------------------------------------
> [110926.972489] kswapd0/725 just changed the state of lock:
> [110926.972490]  (sb_internal){.+.+.?}, at: [<ffffffff8122b268>] xfs_trans_alloc+0x28/0x50
> [110926.972499] but this lock took another, RECLAIM_FS-unsafe lock in the past:
> [110926.972500]  (&(&ip->i_lock)->mr_lock/1){+.+.+.}

Ah, what? Since when has the ilock been reclaim unsafe?

> [110926.972500] and interrupts could create inverse lock ordering between them.
> [110926.972500] 
> [110926.972503] 
> [110926.972503] other info that might help us debug this:
> [110926.972504]  Possible interrupt unsafe locking scenario:
> [110926.972504] 
> [110926.972505]        CPU0                    CPU1
> [110926.972506]        ----                    ----
> [110926.972507]   lock(&(&ip->i_lock)->mr_lock/1);
> [110926.972509]                                local_irq_disable();
> [110926.972509]                                lock(sb_internal);
> [110926.972511]                                lock(&(&ip->i_lock)->mr_lock/1);
> [110926.972512]   <Interrupt>
> [110926.972513]     lock(sb_internal);

Um, that's just bizarre. No XFS code runs with interrupts disabled,
so I cannot see how this is possible.

.....


       [<ffffffff8108137e>] mark_held_locks+0x7e/0x130
       [<ffffffff81081a63>] lockdep_trace_alloc+0x63/0xc0
       [<ffffffff810e9dd5>] kmem_cache_alloc+0x35/0xe0
       [<ffffffff810dba31>] vm_map_ram+0x271/0x770
       [<ffffffff811e1316>] _xfs_buf_map_pages+0x46/0xe0
       [<ffffffff811e222a>] xfs_buf_get_map+0x8a/0x130
       [<ffffffff81233ab9>] xfs_trans_get_buf_map+0xa9/0xd0
       [<ffffffff8121bced>] xfs_ialloc_inode_init+0xcd/0x1d0

We shouldn't be mapping buffers there; there's a patch below to fix
this. It's probably the source of this report, even though lockdep
seems to be off with the fairies...

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

xfs: inode allocation should use unmapped buffers.

From: Dave Chinner <dchinner@redhat.com>

Inode buffers do not need to be mapped as inodes are read or written
directly from/to the pages underlying the buffer. This fixes a
regression introduced by commit 611c994 ("xfs: make XBF_MAPPED the
default behaviour").

Signed-off-by: Dave Chinner <dchinner@redhat.com>
---
 fs/xfs/xfs_ialloc.c |    3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/fs/xfs/xfs_ialloc.c b/fs/xfs/xfs_ialloc.c
index 2d6495e..a815412 100644
--- a/fs/xfs/xfs_ialloc.c
+++ b/fs/xfs/xfs_ialloc.c
@@ -200,7 +200,8 @@ xfs_ialloc_inode_init(
 		 */
 		d = XFS_AGB_TO_DADDR(mp, agno, agbno + (j * blks_per_cluster));
 		fbuf = xfs_trans_get_buf(tp, mp->m_ddev_targp, d,
-					 mp->m_bsize * blks_per_cluster, 0);
+					 mp->m_bsize * blks_per_cluster,
+					 XBF_UNMAPPED);
 		if (!fbuf)
 			return ENOMEM;
 		/*

^ permalink raw reply related	[flat|nested] 31+ messages in thread

* Re: Hang in XFS reclaim on 3.7.0-rc3
  2012-11-19 23:53               ` Dave Chinner
@ 2012-11-20  7:09                 ` Torsten Kaiser
  -1 siblings, 0 replies; 31+ messages in thread
From: Torsten Kaiser @ 2012-11-20  7:09 UTC (permalink / raw)
  To: Dave Chinner; +Cc: xfs, Linux Kernel

On Tue, Nov 20, 2012 at 12:53 AM, Dave Chinner <david@fromorbit.com> wrote:
> On Mon, Nov 19, 2012 at 07:50:06AM +0100, Torsten Kaiser wrote:
> So, both lockdep thingy's are the same:

I suspected this, but as the reports were slightly different I
attached both of them, as I couldn't decide which one was the
better/simpler report to debug this.

>> [110926.972482] =========================================================
>> [110926.972484] [ INFO: possible irq lock inversion dependency detected ]
>> [110926.972486] 3.7.0-rc4 #1 Not tainted
>> [110926.972487] ---------------------------------------------------------
>> [110926.972489] kswapd0/725 just changed the state of lock:
>> [110926.972490]  (sb_internal){.+.+.?}, at: [<ffffffff8122b268>] xfs_trans_alloc+0x28/0x50
>> [110926.972499] but this lock took another, RECLAIM_FS-unsafe lock in the past:
>> [110926.972500]  (&(&ip->i_lock)->mr_lock/1){+.+.+.}
>
> Ah, what? Since when has the ilock been reclaim unsafe?
>
>> [110926.972500] and interrupts could create inverse lock ordering between them.
>> [110926.972500]
>> [110926.972503]
>> [110926.972503] other info that might help us debug this:
>> [110926.972504]  Possible interrupt unsafe locking scenario:
>> [110926.972504]
>> [110926.972505]        CPU0                    CPU1
>> [110926.972506]        ----                    ----
>> [110926.972507]   lock(&(&ip->i_lock)->mr_lock/1);
>> [110926.972509]                                local_irq_disable();
>> [110926.972509]                                lock(sb_internal);
>> [110926.972511]                                lock(&(&ip->i_lock)->mr_lock/1);
>> [110926.972512]   <Interrupt>
>> [110926.972513]     lock(sb_internal);
>
> Um, that's just bizzare. No XFS code runs with interrupts disabled,
> so I cannot see how this possible.
>
> .....
>
>
>        [<ffffffff8108137e>] mark_held_locks+0x7e/0x130
>        [<ffffffff81081a63>] lockdep_trace_alloc+0x63/0xc0
>        [<ffffffff810e9dd5>] kmem_cache_alloc+0x35/0xe0
>        [<ffffffff810dba31>] vm_map_ram+0x271/0x770
>        [<ffffffff811e1316>] _xfs_buf_map_pages+0x46/0xe0
>        [<ffffffff811e222a>] xfs_buf_get_map+0x8a/0x130
>        [<ffffffff81233ab9>] xfs_trans_get_buf_map+0xa9/0xd0
>        [<ffffffff8121bced>] xfs_ialloc_inode_init+0xcd/0x1d0
>
> We shouldn't be mapping buffers there, there's a patch below to fix
> this. It's probably the source of this report, even though I cannot
> lockdep seems to be off with the fairies...

I also tried to understand what lockdep was saying, but
Documentation/lockdep-design.txt is not too helpful.
I think 'CLASS'-ON-R / -ON-W means that this lock was 'ON' / held
while 'CLASS' (HARDIRQ, SOFTIRQ, RECLAIM_FS) happened, and that makes
this lock unsafe for these contexts. IN-'CLASS'-R / -W seems to mean
'lock taken in context 'CLASS''.
A note that 'CLASS'-ON-? means 'CLASS'-unsafe in there would be helpful to me...

Wrt. the above interrupt output: I think lockdep doesn't really know
about RECLAIM_FS and treats it as another interrupt. I think that
output should have been something like this:
        CPU0                    CPU1
        ----                    ----
   lock(&(&ip->i_lock)->mr_lock/1);
                                <Allocation enters reclaim>
                                lock(sb_internal);
                                lock(&(&ip->i_lock)->mr_lock/1);
   <Allocation enters reclaim>
     lock(sb_internal);

Entering reclaim on CPU1 would mean that CPU1 would not enter reclaim
again, so the reclaim-'interrupt' would be disabled.
And instead of an interrupt disrupting the normal code flow on CPU0,
it would be 'interrupted' by reclaim: instead of doing a normal
allocation, the allocation would be 'interrupted' to reclaim memory.
print_irq_lock_scenario() would need to be taught to print a slightly
different message for reclaim-'interrupts'.

I will try your patch, but as I do not have a reliable reproducer to
create this lockdep report, I can't really verify if this fixes it.
But I will definitely mail you, if it happens again with this patch.

Thanks, Torsten

> Cheers,
>
> Dave.
> --
> Dave Chinner
> david@fromorbit.com
>
> xfs: inode allocation should use unmapped buffers.
>
> From: Dave Chinner <dchinner@redhat.com>
>
> Inode buffers do not need to be mapped as inodes are read or written
> directly from/to the pages underlying the buffer. This fixes a
> regression introduced by commit 611c994 ("xfs: make XBF_MAPPED the
> default behaviour").
>
> Signed-off-by: Dave Chinner <dchinner@redhat.com>
> ---
>  fs/xfs/xfs_ialloc.c |    3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/fs/xfs/xfs_ialloc.c b/fs/xfs/xfs_ialloc.c
> index 2d6495e..a815412 100644
> --- a/fs/xfs/xfs_ialloc.c
> +++ b/fs/xfs/xfs_ialloc.c
> @@ -200,7 +200,8 @@ xfs_ialloc_inode_init(
>                  */
>                 d = XFS_AGB_TO_DADDR(mp, agno, agbno + (j * blks_per_cluster));
>                 fbuf = xfs_trans_get_buf(tp, mp->m_ddev_targp, d,
> -                                        mp->m_bsize * blks_per_cluster, 0);
> +                                        mp->m_bsize * blks_per_cluster,
> +                                        XBF_UNMAPPED);
>                 if (!fbuf)
>                         return ENOMEM;
>                 /*

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: Hang in XFS reclaim on 3.7.0-rc3
@ 2012-11-20  7:09                 ` Torsten Kaiser
  0 siblings, 0 replies; 31+ messages in thread
From: Torsten Kaiser @ 2012-11-20  7:09 UTC (permalink / raw)
  To: Dave Chinner; +Cc: Linux Kernel, xfs

On Tue, Nov 20, 2012 at 12:53 AM, Dave Chinner <david@fromorbit.com> wrote:
> On Mon, Nov 19, 2012 at 07:50:06AM +0100, Torsten Kaiser wrote:
> So, both lockdep thingy's are the same:

I suspected this, but as the reports where slightly different I
attached bith of them, as I couldn't decide witch one was the
better/simpler report to debug this.

>> [110926.972482] =========================================================
>> [110926.972484] [ INFO: possible irq lock inversion dependency detected ]
>> [110926.972486] 3.7.0-rc4 #1 Not tainted
>> [110926.972487] ---------------------------------------------------------
>> [110926.972489] kswapd0/725 just changed the state of lock:
>> [110926.972490]  (sb_internal){.+.+.?}, at: [<ffffffff8122b268>] xfs_trans_alloc+0x28/0x50
>> [110926.972499] but this lock took another, RECLAIM_FS-unsafe lock in the past:
>> [110926.972500]  (&(&ip->i_lock)->mr_lock/1){+.+.+.}
>
> Ah, what? Since when has the ilock been reclaim unsafe?
>
>> [110926.972500] and interrupts could create inverse lock ordering between them.
>> [110926.972500]
>> [110926.972503]
>> [110926.972503] other info that might help us debug this:
>> [110926.972504]  Possible interrupt unsafe locking scenario:
>> [110926.972504]
>> [110926.972505]        CPU0                    CPU1
>> [110926.972506]        ----                    ----
>> [110926.972507]   lock(&(&ip->i_lock)->mr_lock/1);
>> [110926.972509]                                local_irq_disable();
>> [110926.972509]                                lock(sb_internal);
>> [110926.972511]                                lock(&(&ip->i_lock)->mr_lock/1);
>> [110926.972512]   <Interrupt>
>> [110926.972513]     lock(sb_internal);
>
> Um, that's just bizarre. No XFS code runs with interrupts disabled,
> so I cannot see how this is possible.
>
> .....
>
>
>        [<ffffffff8108137e>] mark_held_locks+0x7e/0x130
>        [<ffffffff81081a63>] lockdep_trace_alloc+0x63/0xc0
>        [<ffffffff810e9dd5>] kmem_cache_alloc+0x35/0xe0
>        [<ffffffff810dba31>] vm_map_ram+0x271/0x770
>        [<ffffffff811e1316>] _xfs_buf_map_pages+0x46/0xe0
>        [<ffffffff811e222a>] xfs_buf_get_map+0x8a/0x130
>        [<ffffffff81233ab9>] xfs_trans_get_buf_map+0xa9/0xd0
>        [<ffffffff8121bced>] xfs_ialloc_inode_init+0xcd/0x1d0
>
> We shouldn't be mapping buffers there, there's a patch below to fix
> this. It's probably the source of this report, even though I cannot
> say for sure - lockdep seems to be off with the fairies...

I also tried to understand what lockdep was saying, but
Documentation/lockdep-design.txt is not too helpful.
I think 'CLASS'-ON-R / -ON-W means that this lock was 'ON' / held
while 'CLASS' (HARDIRQ, SOFTIRQ, RECLAIM_FS) happened, and that makes
this lock unsafe for these contexts. IN-'CLASS'-R / -W seems to mean
'lock taken in context 'CLASS''.
A note that 'CLASS'-ON-? means 'CLASS'-unsafe in there would be helpful to me...

Wrt. the above interrupt output: I think lockdep doesn't really know about
RECLAIM_FS and treats it as just another interrupt. I think that output
should have been something like this:
        CPU0                    CPU1
        ----                    ----
   lock(&(&ip->i_lock)->mr_lock/1);
                                <Allocation enters reclaim>
                                lock(sb_internal);
                                lock(&(&ip->i_lock)->mr_lock/1);
   <Allocation enters reclaim>
     lock(sb_internal);

Entering reclaim on CPU1 would mean that CPU1 could not enter reclaim
again, so the reclaim-'interrupt' would be disabled. And instead of an
interrupt disrupting the normal code flow on CPU0, the allocation
itself would be 'interrupted' to reclaim memory instead of proceeding
normally.
print_irq_lock_scenario() would need to be taught to print a slightly
different message for reclaim-'interrupts'.

I will try your patch, but as I do not have a reliable reproducer to
create this lockdep report, I can't really verify if this fixes it.
But I will definitely mail you, if it happens again with this patch.

Thanks, Torsten

> Cheers,
>
> Dave.
> --
> Dave Chinner
> david@fromorbit.com
>
> xfs: inode allocation should use unmapped buffers.
>
> From: Dave Chinner <dchinner@redhat.com>
>
> Inode buffers do not need to be mapped as inodes are read or written
> directly from/to the pages underlying the buffer. This fixes a
> regression introduced by commit 611c994 ("xfs: make XBF_MAPPED the
> default behaviour").
>
> Signed-off-by: Dave Chinner <dchinner@redhat.com>
> ---
>  fs/xfs/xfs_ialloc.c |    3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/fs/xfs/xfs_ialloc.c b/fs/xfs/xfs_ialloc.c
> index 2d6495e..a815412 100644
> --- a/fs/xfs/xfs_ialloc.c
> +++ b/fs/xfs/xfs_ialloc.c
> @@ -200,7 +200,8 @@ xfs_ialloc_inode_init(
>                  */
>                 d = XFS_AGB_TO_DADDR(mp, agno, agbno + (j * blks_per_cluster));
>                 fbuf = xfs_trans_get_buf(tp, mp->m_ddev_targp, d,
> -                                        mp->m_bsize * blks_per_cluster, 0);
> +                                        mp->m_bsize * blks_per_cluster,
> +                                        XBF_UNMAPPED);
>                 if (!fbuf)
>                         return ENOMEM;
>                 /*

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: Hang in XFS reclaim on 3.7.0-rc3
  2012-11-19 23:53               ` Dave Chinner
@ 2012-11-20 19:45                 ` Torsten Kaiser
  -1 siblings, 0 replies; 31+ messages in thread
From: Torsten Kaiser @ 2012-11-20 19:45 UTC (permalink / raw)
  To: Dave Chinner; +Cc: xfs, Linux Kernel

[-- Attachment #1: Type: text/plain, Size: 2768 bytes --]

On Tue, Nov 20, 2012 at 12:53 AM, Dave Chinner <david@fromorbit.com> wrote:
>        [<ffffffff8108137e>] mark_held_locks+0x7e/0x130
>        [<ffffffff81081a63>] lockdep_trace_alloc+0x63/0xc0
>        [<ffffffff810e9dd5>] kmem_cache_alloc+0x35/0xe0
>        [<ffffffff810dba31>] vm_map_ram+0x271/0x770
>        [<ffffffff811e1316>] _xfs_buf_map_pages+0x46/0xe0
>        [<ffffffff811e222a>] xfs_buf_get_map+0x8a/0x130
>        [<ffffffff81233ab9>] xfs_trans_get_buf_map+0xa9/0xd0
>        [<ffffffff8121bced>] xfs_ialloc_inode_init+0xcd/0x1d0
>
> We shouldn't be mapping buffers there, there's a patch below to fix
> this. It's probably the source of this report, even though I cannot
> say for sure - lockdep seems to be off with the fairies...

That patch seems to break my system.
After the system started to swap, because I was compiling seamonkey
(firefox turned into the full navigator suite) on a tmpfs, several
processes got stuck and triggered the hung-task check.
As kswapd, xfsaild/md4 and flush-9:4 also got stuck, not even a
shutdown worked.

The attached log first contains the hung-task-notices, then the output
from SysRq+W.

After the shutdown got stuck trying to turn off swap, I first tried
SysRq+S, but did not get a 'Done', and on SysRq+U lockdep complained
about a lock imbalance wrt. sb_writers. SysRq+O also no longer
worked, only SysRq+B.

I don't know which one got stuck first, but I'm somewhat suspicious of
the plasma-desktop and the sshd that SysRq+W reported stuck in XFS
reclaim, even though these processes never triggered the hung-task
check.

Torsten

> Cheers,
>
> Dave.
> --
> Dave Chinner
> david@fromorbit.com
>
> xfs: inode allocation should use unmapped buffers.
>
> From: Dave Chinner <dchinner@redhat.com>
>
> Inode buffers do not need to be mapped as inodes are read or written
> directly from/to the pages underlying the buffer. This fixes a
> regression introduced by commit 611c994 ("xfs: make XBF_MAPPED the
> default behaviour").
>
> Signed-off-by: Dave Chinner <dchinner@redhat.com>
> ---
>  fs/xfs/xfs_ialloc.c |    3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/fs/xfs/xfs_ialloc.c b/fs/xfs/xfs_ialloc.c
> index 2d6495e..a815412 100644
> --- a/fs/xfs/xfs_ialloc.c
> +++ b/fs/xfs/xfs_ialloc.c
> @@ -200,7 +200,8 @@ xfs_ialloc_inode_init(
>                  */
>                 d = XFS_AGB_TO_DADDR(mp, agno, agbno + (j * blks_per_cluster));
>                 fbuf = xfs_trans_get_buf(tp, mp->m_ddev_targp, d,
> -                                        mp->m_bsize * blks_per_cluster, 0);
> +                                        mp->m_bsize * blks_per_cluster,
> +                                        XBF_UNMAPPED);
>                 if (!fbuf)
>                         return ENOMEM;
>                 /*

[-- Attachment #2: xfs-reclaim-hang-messages.txt --]
[-- Type: text/plain, Size: 79805 bytes --]

Nov 20 19:24:55 thoregon acpid: client 2991[0:0] has disconnected
Nov 20 19:25:07 thoregon login[3002]: pam_unix(login:session): session opened for user root by LOGIN(uid=0)
Nov 20 19:25:07 thoregon login[27176]: ROOT LOGIN  on '/dev/tty1'
Nov 20 19:27:23 thoregon kernel: [ 2160.595990] INFO: task kswapd0:725 blocked for more than 120 seconds.
Nov 20 19:27:23 thoregon kernel: [ 2160.595998] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 20 19:27:23 thoregon kernel: [ 2160.596004] kswapd0         D 0000000000000001     0   725      2 0x00000000
Nov 20 19:27:23 thoregon kernel: [ 2160.596016]  ffff8803280d13f8 0000000000000046 ffff880329a0ab80 ffff8803280d1fd8
Nov 20 19:27:23 thoregon kernel: [ 2160.596030]  ffff8803280d1fd8 ffff8803280d1fd8 ffff880046b7c880 ffff880329a0ab80
Nov 20 19:27:23 thoregon kernel: [ 2160.596041]  ffff8803280d1408 ffff8803278dbbd0 ffff8803278db800 00000000ffffffff
Nov 20 19:27:23 thoregon kernel: [ 2160.596052] Call Trace:
Nov 20 19:27:23 thoregon kernel: [ 2160.596070]  [<ffffffff816b1224>] schedule+0x24/0x60
Nov 20 19:27:23 thoregon kernel: [ 2160.596080]  [<ffffffff814f9dad>] md_super_wait+0x4d/0x80
Nov 20 19:27:23 thoregon kernel: [ 2160.596091]  [<ffffffff8105ca30>] ? __init_waitqueue_head+0x60/0x60
Nov 20 19:27:23 thoregon kernel: [ 2160.596100]  [<ffffffff81500753>] bitmap_unplug+0x173/0x180
Nov 20 19:27:23 thoregon kernel: [ 2160.596109]  [<ffffffff8108152d>] ? trace_hardirqs_on_caller+0xfd/0x190
Nov 20 19:27:23 thoregon kernel: [ 2160.596118]  [<ffffffff814e8eb8>] raid1_unplug+0x98/0x110
Nov 20 19:27:23 thoregon kernel: [ 2160.596130]  [<ffffffff81278a6d>] blk_flush_plug_list+0xad/0x240
Nov 20 19:27:23 thoregon kernel: [ 2160.596138]  [<ffffffff816b24ca>] ? _raw_spin_unlock_irqrestore+0x3a/0x70
Nov 20 19:27:23 thoregon kernel: [ 2160.596148]  [<ffffffff816b15c3>] io_schedule_timeout+0x83/0xf0
Nov 20 19:27:23 thoregon kernel: [ 2160.596156]  [<ffffffff810815cd>] ? trace_hardirqs_on+0xd/0x10
Nov 20 19:27:23 thoregon kernel: [ 2160.596166]  [<ffffffff810b0e1d>] mempool_alloc+0x12d/0x160
Nov 20 19:27:23 thoregon kernel: [ 2160.596174]  [<ffffffff8105ca30>] ? __init_waitqueue_head+0x60/0x60
Nov 20 19:27:23 thoregon kernel: [ 2160.596184]  [<ffffffff811263da>] bvec_alloc_bs+0xda/0x100
Nov 20 19:27:23 thoregon kernel: [ 2160.596192]  [<ffffffff811264ea>] bio_alloc_bioset+0xea/0x110
Nov 20 19:27:23 thoregon kernel: [ 2160.596201]  [<ffffffff81126656>] bio_clone_bioset+0x16/0x40
Nov 20 19:27:23 thoregon kernel: [ 2160.596208]  [<ffffffff814f471a>] bio_clone_mddev+0x1a/0x30
Nov 20 19:27:23 thoregon kernel: [ 2160.596217]  [<ffffffff814edbb1>] make_request+0x551/0xde0
Nov 20 19:27:23 thoregon kernel: [ 2160.596224]  [<ffffffff814ed9e0>] ? make_request+0x380/0xde0
Nov 20 19:27:23 thoregon kernel: [ 2160.596233]  [<ffffffff810b0b80>] ? mempool_alloc_slab+0x10/0x20
Nov 20 19:27:23 thoregon kernel: [ 2160.596242]  [<ffffffff814f80bb>] md_make_request+0x21b/0x4d0
Nov 20 19:27:23 thoregon kernel: [ 2160.596249]  [<ffffffff814f7ef0>] ? md_make_request+0x50/0x4d0
Nov 20 19:27:23 thoregon kernel: [ 2160.596258]  [<ffffffff81276e52>] generic_make_request+0xc2/0x100
Nov 20 19:27:23 thoregon kernel: [ 2160.596267]  [<ffffffff81276ef5>] submit_bio+0x65/0x110
Nov 20 19:27:23 thoregon kernel: [ 2160.596278]  [<ffffffff811e07bf>] xfs_submit_ioend_bio.isra.21+0x2f/0x40
Nov 20 19:27:23 thoregon kernel: [ 2160.596286]  [<ffffffff811e088e>] xfs_submit_ioend+0xbe/0x110
Nov 20 19:27:23 thoregon kernel: [ 2160.596295]  [<ffffffff811e0c91>] xfs_vm_writepage+0x3b1/0x540
Nov 20 19:27:23 thoregon kernel: [ 2160.596305]  [<ffffffff810bcd84>] shrink_page_list+0x564/0x890
Nov 20 19:27:23 thoregon kernel: [ 2160.596315]  [<ffffffff810bd637>] shrink_inactive_list+0x1d7/0x310
Nov 20 19:27:23 thoregon kernel: [ 2160.596325]  [<ffffffff810bdb9d>] shrink_lruvec+0x42d/0x530
Nov 20 19:27:23 thoregon kernel: [ 2160.596334]  [<ffffffff810be323>] kswapd+0x683/0xa20
Nov 20 19:27:23 thoregon kernel: [ 2160.596344]  [<ffffffff8105ca30>] ? __init_waitqueue_head+0x60/0x60
Nov 20 19:27:23 thoregon kernel: [ 2160.596352]  [<ffffffff810bdca0>] ? shrink_lruvec+0x530/0x530
Nov 20 19:27:23 thoregon kernel: [ 2160.596360]  [<ffffffff8105c246>] kthread+0xd6/0xe0
Nov 20 19:27:23 thoregon kernel: [ 2160.596367]  [<ffffffff816b246b>] ? _raw_spin_unlock_irq+0x2b/0x50
Nov 20 19:27:23 thoregon kernel: [ 2160.596376]  [<ffffffff8105c170>] ? flush_kthread_worker+0xe0/0xe0
Nov 20 19:27:23 thoregon kernel: [ 2160.596384]  [<ffffffff816b31ac>] ret_from_fork+0x7c/0xb0
Nov 20 19:27:23 thoregon kernel: [ 2160.596392]  [<ffffffff8105c170>] ? flush_kthread_worker+0xe0/0xe0
Nov 20 19:27:23 thoregon kernel: [ 2160.596397] no locks held by kswapd0/725.
Nov 20 19:27:23 thoregon kernel: [ 2160.596424] INFO: task xfsaild/md4:1742 blocked for more than 120 seconds.
Nov 20 19:27:23 thoregon kernel: [ 2160.596428] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 20 19:27:23 thoregon kernel: [ 2160.596432] xfsaild/md4     D 0000000000000003     0  1742      2 0x00000000
Nov 20 19:27:23 thoregon kernel: [ 2160.596441]  ffff88032438bb68 0000000000000046 ffff880329965700 ffff88032438bfd8
Nov 20 19:27:23 thoregon kernel: [ 2160.596452]  ffff88032438bfd8 ffff88032438bfd8 ffff88032827e580 ffff880329965700
Nov 20 19:27:23 thoregon kernel: [ 2160.596462]  ffff88032438bb78 ffff8803278dbbd0 ffff8803278db800 00000000ffffffff
Nov 20 19:27:23 thoregon kernel: [ 2160.596473] Call Trace:
Nov 20 19:27:23 thoregon kernel: [ 2160.596483]  [<ffffffff816b1224>] schedule+0x24/0x60
Nov 20 19:27:23 thoregon kernel: [ 2160.596490]  [<ffffffff814f9dad>] md_super_wait+0x4d/0x80
Nov 20 19:27:23 thoregon kernel: [ 2160.596498]  [<ffffffff8105ca30>] ? __init_waitqueue_head+0x60/0x60
Nov 20 19:27:23 thoregon kernel: [ 2160.596507]  [<ffffffff81500753>] bitmap_unplug+0x173/0x180
Nov 20 19:27:23 thoregon kernel: [ 2160.596516]  [<ffffffff81278c13>] ? blk_finish_plug+0x13/0x50
Nov 20 19:27:23 thoregon kernel: [ 2160.596524]  [<ffffffff814e8eb8>] raid1_unplug+0x98/0x110
Nov 20 19:27:23 thoregon kernel: [ 2160.596533]  [<ffffffff81278a6d>] blk_flush_plug_list+0xad/0x240
Nov 20 19:27:23 thoregon kernel: [ 2160.596542]  [<ffffffff81278c13>] blk_finish_plug+0x13/0x50
Nov 20 19:27:23 thoregon kernel: [ 2160.596550]  [<ffffffff811e296a>] __xfs_buf_delwri_submit+0x1ca/0x1e0
Nov 20 19:27:23 thoregon kernel: [ 2160.596558]  [<ffffffff811e2ffb>] xfs_buf_delwri_submit_nowait+0x1b/0x20
Nov 20 19:27:23 thoregon kernel: [ 2160.596566]  [<ffffffff81233066>] xfsaild+0x226/0x4c0
Nov 20 19:27:23 thoregon kernel: [ 2160.596575]  [<ffffffff81065dfa>] ? finish_task_switch+0x3a/0x100
Nov 20 19:27:23 thoregon kernel: [ 2160.596583]  [<ffffffff81232e40>] ? xfs_trans_ail_cursor_first+0xa0/0xa0
Nov 20 19:27:23 thoregon kernel: [ 2160.596591]  [<ffffffff8105c246>] kthread+0xd6/0xe0
Nov 20 19:27:23 thoregon kernel: [ 2160.596598]  [<ffffffff816b246b>] ? _raw_spin_unlock_irq+0x2b/0x50
Nov 20 19:27:23 thoregon kernel: [ 2160.596607]  [<ffffffff8105c170>] ? flush_kthread_worker+0xe0/0xe0
Nov 20 19:27:23 thoregon kernel: [ 2160.596614]  [<ffffffff816b31ac>] ret_from_fork+0x7c/0xb0
Nov 20 19:27:23 thoregon kernel: [ 2160.596622]  [<ffffffff8105c170>] ? flush_kthread_worker+0xe0/0xe0
Nov 20 19:27:23 thoregon kernel: [ 2160.596627] no locks held by xfsaild/md4/1742.
Nov 20 19:27:23 thoregon kernel: [ 2160.596709] INFO: task flush-9:4:30365 blocked for more than 120 seconds.
Nov 20 19:27:23 thoregon kernel: [ 2160.596712] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 20 19:27:23 thoregon kernel: [ 2160.596716] flush-9:4       D 0000000000000001     0 30365      2 0x00000000
Nov 20 19:27:23 thoregon kernel: [ 2160.596725]  ffff8801d8ce1978 0000000000000046 ffff880328238e80 ffff8801d8ce1fd8
Nov 20 19:27:23 thoregon kernel: [ 2160.596735]  ffff8801d8ce1fd8 ffff8801d8ce1fd8 ffff8803032cc880 ffff880328238e80
Nov 20 19:27:23 thoregon kernel: [ 2160.596745]  ffff8801d8ce1988 ffff8803278dbbd0 ffff8803278db800 00000000ffffffff
Nov 20 19:27:23 thoregon kernel: [ 2160.596756] Call Trace:
Nov 20 19:27:23 thoregon kernel: [ 2160.596765]  [<ffffffff816b1224>] schedule+0x24/0x60
Nov 20 19:27:23 thoregon kernel: [ 2160.596773]  [<ffffffff814f9dad>] md_super_wait+0x4d/0x80
Nov 20 19:27:23 thoregon kernel: [ 2160.596781]  [<ffffffff8105ca30>] ? __init_waitqueue_head+0x60/0x60
Nov 20 19:27:23 thoregon kernel: [ 2160.596788]  [<ffffffff81500753>] bitmap_unplug+0x173/0x180
Nov 20 19:27:23 thoregon kernel: [ 2160.596796]  [<ffffffff810b6acf>] ? write_cache_pages+0x12f/0x420
Nov 20 19:27:23 thoregon kernel: [ 2160.596804]  [<ffffffff810b6700>] ? set_page_dirty_lock+0x60/0x60
Nov 20 19:27:23 thoregon kernel: [ 2160.596812]  [<ffffffff814e8eb8>] raid1_unplug+0x98/0x110
Nov 20 19:27:23 thoregon kernel: [ 2160.596821]  [<ffffffff81278a6d>] blk_flush_plug_list+0xad/0x240
Nov 20 19:27:23 thoregon kernel: [ 2160.596830]  [<ffffffff81278c13>] blk_finish_plug+0x13/0x50
Nov 20 19:27:23 thoregon kernel: [ 2160.596838]  [<ffffffff810b6e0a>] generic_writepages+0x4a/0x60
Nov 20 19:27:23 thoregon kernel: [ 2160.596847]  [<ffffffff811df898>] xfs_vm_writepages+0x48/0x60
Nov 20 19:27:23 thoregon kernel: [ 2160.596856]  [<ffffffff8111b144>] ? writeback_sb_inodes+0x174/0x410
Nov 20 19:27:23 thoregon kernel: [ 2160.596864]  [<ffffffff810b84be>] do_writepages+0x1e/0x30
Nov 20 19:27:23 thoregon kernel: [ 2160.596873]  [<ffffffff8111a1ee>] __writeback_single_inode+0x3e/0x120
Nov 20 19:27:23 thoregon kernel: [ 2160.596882]  [<ffffffff8111b234>] writeback_sb_inodes+0x264/0x410
Nov 20 19:27:23 thoregon kernel: [ 2160.596891]  [<ffffffff8111b476>] __writeback_inodes_wb+0x96/0xc0
Nov 20 19:27:23 thoregon kernel: [ 2160.596900]  [<ffffffff8111b673>] wb_writeback+0x1d3/0x1e0
Nov 20 19:27:23 thoregon kernel: [ 2160.596908]  [<ffffffff8108152d>] ? trace_hardirqs_on_caller+0xfd/0x190
Nov 20 19:27:23 thoregon kernel: [ 2160.596916]  [<ffffffff8111bb1c>] ? wb_do_writeback+0x7c/0x140
Nov 20 19:27:23 thoregon kernel: [ 2160.596925]  [<ffffffff8111bb40>] wb_do_writeback+0xa0/0x140
Nov 20 19:27:23 thoregon kernel: [ 2160.596935]  [<ffffffff8111bc52>] bdi_writeback_thread+0x72/0x150
Nov 20 19:27:23 thoregon kernel: [ 2160.596944]  [<ffffffff8111bbe0>] ? wb_do_writeback+0x140/0x140
Nov 20 19:27:23 thoregon kernel: [ 2160.596951]  [<ffffffff8105c246>] kthread+0xd6/0xe0
Nov 20 19:27:23 thoregon kernel: [ 2160.596958]  [<ffffffff816b246b>] ? _raw_spin_unlock_irq+0x2b/0x50
Nov 20 19:27:23 thoregon kernel: [ 2160.596967]  [<ffffffff8105c170>] ? flush_kthread_worker+0xe0/0xe0
Nov 20 19:27:23 thoregon kernel: [ 2160.596975]  [<ffffffff816b31ac>] ret_from_fork+0x7c/0xb0
Nov 20 19:27:23 thoregon kernel: [ 2160.596983]  [<ffffffff8105c170>] ? flush_kthread_worker+0xe0/0xe0
Nov 20 19:27:23 thoregon kernel: [ 2160.596988] 1 lock held by flush-9:4/30365:
Nov 20 19:27:23 thoregon kernel: [ 2160.596991]  #0:  (&type->s_umount_key#20){++++.+}, at: [<ffffffff810f59de>] grab_super_passive+0x3e/0x90
Nov 20 19:27:23 thoregon kernel: [ 2160.597019] INFO: task x86_64-pc-linux:26516 blocked for more than 120 seconds.
Nov 20 19:27:23 thoregon kernel: [ 2160.597023] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 20 19:27:23 thoregon kernel: [ 2160.597027] x86_64-pc-linux D 0000000000000000     0 26516  26114 0x00000000
Nov 20 19:27:23 thoregon kernel: [ 2160.597035]  ffff8800468dddf8 0000000000000046 ffff880329303a00 ffff8800468ddfd8
Nov 20 19:27:23 thoregon kernel: [ 2160.597045]  ffff8800468ddfd8 ffff8800468ddfd8 ffff88032989c880 ffff880329303a00
Nov 20 19:27:23 thoregon kernel: [ 2160.597056]  0000000000000246 ffff8800468dc000 ffff8801b1a1bc68 0000000002405150
Nov 20 19:27:23 thoregon kernel: [ 2160.597066] Call Trace:
Nov 20 19:27:23 thoregon kernel: [ 2160.597076]  [<ffffffff816b1224>] schedule+0x24/0x60
Nov 20 19:27:23 thoregon kernel: [ 2160.597085]  [<ffffffff816b1501>] schedule_preempt_disabled+0x21/0x30
Nov 20 19:27:23 thoregon kernel: [ 2160.597094]  [<ffffffff816af3a5>] mutex_lock_nested+0x165/0x330
Nov 20 19:27:23 thoregon kernel: [ 2160.597103]  [<ffffffff810ff5e3>] ? do_unlinkat+0xa3/0x1b0
Nov 20 19:27:23 thoregon kernel: [ 2160.597112]  [<ffffffff810fefae>] ? filename_lookup.isra.57+0x2e/0x80
Nov 20 19:27:23 thoregon kernel: [ 2160.597121]  [<ffffffff810ff5e3>] do_unlinkat+0xa3/0x1b0
Nov 20 19:27:23 thoregon kernel: [ 2160.597129]  [<ffffffff8108152d>] ? trace_hardirqs_on_caller+0xfd/0x190
Nov 20 19:27:23 thoregon kernel: [ 2160.597137]  [<ffffffff8129ca1e>] ? trace_hardirqs_on_thunk+0x3a/0x3f
Nov 20 19:27:23 thoregon kernel: [ 2160.597146]  [<ffffffff81101bb1>] sys_unlink+0x11/0x20
Nov 20 19:27:23 thoregon kernel: [ 2160.597154]  [<ffffffff816b3252>] system_call_fastpath+0x16/0x1b
Nov 20 19:27:23 thoregon kernel: [ 2160.597159] 2 locks held by x86_64-pc-linux/26516:
Nov 20 19:27:23 thoregon kernel: [ 2160.597162]  #0:  (sb_writers#6){.+.+.+}, at: [<ffffffff811135ef>] mnt_want_write+0x1f/0x50
Nov 20 19:27:23 thoregon kernel: [ 2160.597178]  #1:  (&type->i_mutex_dir_key#3/1){+.+.+.}, at: [<ffffffff810ff5e3>] do_unlinkat+0xa3/0x1b0
Nov 20 19:27:23 thoregon kernel: [ 2160.597198] INFO: task x86_64-pc-linux:26523 blocked for more than 120 seconds.
Nov 20 19:27:23 thoregon kernel: [ 2160.597201] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 20 19:27:23 thoregon kernel: [ 2160.597205] x86_64-pc-linux D 0000000000000000     0 26523  26114 0x00000000
Nov 20 19:27:23 thoregon kernel: [ 2160.597212]  ffff8800c0def858 0000000000000046 ffff88032986c880 ffff8800c0deffd8
Nov 20 19:27:23 thoregon kernel: [ 2160.597223]  ffff8800c0deffd8 ffff8800c0deffd8 ffff88032989e580 ffff88032986c880
Nov 20 19:27:23 thoregon kernel: [ 2160.597233]  ffff8800c0def928 ffff8801c57e3a30 7fffffffffffffff ffff88032986c880
Nov 20 19:27:23 thoregon kernel: [ 2160.597243] Call Trace:
Nov 20 19:27:23 thoregon kernel: [ 2160.597253]  [<ffffffff816b1224>] schedule+0x24/0x60
Nov 20 19:27:23 thoregon kernel: [ 2160.597261]  [<ffffffff816aeddd>] schedule_timeout+0x17d/0x1c0
Nov 20 19:27:23 thoregon kernel: [ 2160.597269]  [<ffffffff816b246b>] ? _raw_spin_unlock_irq+0x2b/0x50
Nov 20 19:27:23 thoregon kernel: [ 2160.597277]  [<ffffffff8108152d>] ? trace_hardirqs_on_caller+0xfd/0x190
Nov 20 19:27:23 thoregon kernel: [ 2160.597286]  [<ffffffff816b057e>] __down_common+0x97/0xe8
Nov 20 19:27:23 thoregon kernel: [ 2160.597294]  [<ffffffff816b062e>] __down+0x18/0x1a
Nov 20 19:27:23 thoregon kernel: [ 2160.597301]  [<ffffffff8106245b>] down+0x3b/0x50
Nov 20 19:27:23 thoregon kernel: [ 2160.597308]  [<ffffffff811e21b5>] xfs_buf_lock+0x25/0x60
Nov 20 19:27:23 thoregon kernel: [ 2160.597315]  [<ffffffff811e22e2>] _xfs_buf_find+0xf2/0x200
Nov 20 19:27:23 thoregon kernel: [ 2160.597323]  [<ffffffff811e241f>] xfs_buf_get_map+0x2f/0x130
Nov 20 19:27:23 thoregon kernel: [ 2160.597331]  [<ffffffff81210417>] ? xfs_dabuf_map.isra.23+0xb7/0x2f0
Nov 20 19:27:23 thoregon kernel: [ 2160.597339]  [<ffffffff810b572e>] ? get_page_from_freelist+0x1de/0x460
Nov 20 19:27:23 thoregon kernel: [ 2160.597346]  [<ffffffff811e2b23>] xfs_buf_read_map+0x13/0x90
Nov 20 19:27:23 thoregon kernel: [ 2160.597353]  [<ffffffff81233e8d>] xfs_trans_read_buf_map+0x1ad/0x2d0
Nov 20 19:27:23 thoregon kernel: [ 2160.597361]  [<ffffffff812116b6>] xfs_da_read_buf+0xb6/0x180
Nov 20 19:27:23 thoregon kernel: [ 2160.597371]  [<ffffffff8121724c>] xfs_dir2_leaf_lookup_int+0x5c/0x2c0
Nov 20 19:27:23 thoregon kernel: [ 2160.597380]  [<ffffffff81217583>] xfs_dir2_leaf_removename+0x33/0x330
Nov 20 19:27:23 thoregon kernel: [ 2160.597389]  [<ffffffff8120777b>] ? xfs_bmap_last_offset+0x8b/0xc0
Nov 20 19:27:23 thoregon kernel: [ 2160.597398]  [<ffffffff81213b94>] xfs_dir_removename+0x134/0x150
Nov 20 19:27:23 thoregon kernel: [ 2160.597406]  [<ffffffff811f5ce6>] xfs_remove+0x1f6/0x370
Nov 20 19:27:23 thoregon kernel: [ 2160.597416]  [<ffffffff816af4ac>] ? mutex_lock_nested+0x26c/0x330
Nov 20 19:27:23 thoregon kernel: [ 2160.597424]  [<ffffffff810ff48f>] ? vfs_unlink+0x4f/0x100
Nov 20 19:27:23 thoregon kernel: [ 2160.597432]  [<ffffffff811ec7f3>] xfs_vn_unlink+0x43/0x90
Nov 20 19:27:23 thoregon kernel: [ 2160.597441]  [<ffffffff810ff4cd>] vfs_unlink+0x8d/0x100
Nov 20 19:27:23 thoregon kernel: [ 2160.597449]  [<ffffffff810ff63a>] do_unlinkat+0xfa/0x1b0
Nov 20 19:27:23 thoregon kernel: [ 2160.597458]  [<ffffffff8108152d>] ? trace_hardirqs_on_caller+0xfd/0x190
Nov 20 19:27:23 thoregon kernel: [ 2160.597466]  [<ffffffff8129ca1e>] ? trace_hardirqs_on_thunk+0x3a/0x3f
Nov 20 19:27:23 thoregon kernel: [ 2160.597475]  [<ffffffff81101bb1>] sys_unlink+0x11/0x20
Nov 20 19:27:23 thoregon kernel: [ 2160.597482]  [<ffffffff816b3252>] system_call_fastpath+0x16/0x1b
Nov 20 19:27:23 thoregon kernel: [ 2160.597487] 6 locks held by x86_64-pc-linux/26523:
Nov 20 19:27:23 thoregon kernel: [ 2160.597490]  #0:  (sb_writers#6){.+.+.+}, at: [<ffffffff811135ef>] mnt_want_write+0x1f/0x50
Nov 20 19:27:23 thoregon kernel: [ 2160.597505]  #1:  (&type->i_mutex_dir_key#3/1){+.+.+.}, at: [<ffffffff810ff5e3>] do_unlinkat+0xa3/0x1b0
Nov 20 19:27:23 thoregon kernel: [ 2160.597522]  #2:  (&sb->s_type->i_mutex_key#7){+.+.+.}, at: [<ffffffff810ff48f>] vfs_unlink+0x4f/0x100
Nov 20 19:27:23 thoregon kernel: [ 2160.597538]  #3:  (sb_internal){.+.+.?}, at: [<ffffffff8122b3e8>] xfs_trans_alloc+0x28/0x50
Nov 20 19:27:23 thoregon kernel: [ 2160.597552]  #4:  (&(&ip->i_lock)->mr_lock/4){+.+...}, at: [<ffffffff811e8414>] xfs_ilock+0x84/0xb0
Nov 20 19:27:23 thoregon kernel: [ 2160.597567]  #5:  (&(&ip->i_lock)->mr_lock/5){+.+...}, at: [<ffffffff811e8414>] xfs_ilock+0x84/0xb0
Nov 20 19:27:23 thoregon kernel: [ 2160.597584] INFO: task x86_64-pc-linux:26875 blocked for more than 120 seconds.
Nov 20 19:27:23 thoregon kernel: [ 2160.597587] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 20 19:27:23 thoregon kernel: [ 2160.597591] x86_64-pc-linux D 0000000000000000     0 26875  26114 0x00000000
Nov 20 19:27:23 thoregon kernel: [ 2160.597598]  ffff8800c0c5ba08 0000000000000046 ffff8802f0594880 ffff8800c0c5bfd8
Nov 20 19:27:23 thoregon kernel: [ 2160.597609]  ffff8800c0c5bfd8 ffff8800c0c5bfd8 ffff88032989c880 ffff8802f0594880
Nov 20 19:27:23 thoregon kernel: [ 2160.597619]  0000000000000250 ffff880205a7aa30 7fffffffffffffff ffff8802f0594880
Nov 20 19:27:23 thoregon kernel: [ 2160.597630] Call Trace:
Nov 20 19:27:23 thoregon kernel: [ 2160.597639]  [<ffffffff816b1224>] schedule+0x24/0x60
Nov 20 19:27:23 thoregon kernel: [ 2160.597648]  [<ffffffff816aeddd>] schedule_timeout+0x17d/0x1c0
Nov 20 19:27:23 thoregon kernel: [ 2160.597655]  [<ffffffff811f70b4>] ? kmem_zone_zalloc+0x34/0x50
Nov 20 19:27:23 thoregon kernel: [ 2160.597662]  [<ffffffff816b246b>] ? _raw_spin_unlock_irq+0x2b/0x50
Nov 20 19:27:23 thoregon kernel: [ 2160.597670]  [<ffffffff8108152d>] ? trace_hardirqs_on_caller+0xfd/0x190
Nov 20 19:27:23 thoregon kernel: [ 2160.597679]  [<ffffffff816b057e>] __down_common+0x97/0xe8
Nov 20 19:27:23 thoregon kernel: [ 2160.597688]  [<ffffffff816b062e>] __down+0x18/0x1a
Nov 20 19:27:23 thoregon kernel: [ 2160.597695]  [<ffffffff8106245b>] down+0x3b/0x50
Nov 20 19:27:23 thoregon kernel: [ 2160.597702]  [<ffffffff811e21b5>] xfs_buf_lock+0x25/0x60
Nov 20 19:27:23 thoregon kernel: [ 2160.597708]  [<ffffffff811e22e2>] _xfs_buf_find+0xf2/0x200
Nov 20 19:27:23 thoregon kernel: [ 2160.597716]  [<ffffffff811e241f>] xfs_buf_get_map+0x2f/0x130
Nov 20 19:27:23 thoregon kernel: [ 2160.597723]  [<ffffffff811e2b23>] xfs_buf_read_map+0x13/0x90
Nov 20 19:27:23 thoregon kernel: [ 2160.597730]  [<ffffffff81233e8d>] xfs_trans_read_buf_map+0x1ad/0x2d0
Nov 20 19:27:23 thoregon kernel: [ 2160.597739]  [<ffffffff8121ce6f>] xfs_read_agi+0x7f/0x110
Nov 20 19:27:23 thoregon kernel: [ 2160.597748]  [<ffffffff8121fb55>] xfs_iunlink+0x45/0x150
Nov 20 19:27:23 thoregon kernel: [ 2160.597757]  [<ffffffff81041fc1>] ? current_fs_time+0x11/0x50
Nov 20 19:27:23 thoregon kernel: [ 2160.597764]  [<ffffffff81234352>] ? xfs_trans_ichgtime+0x22/0xa0
Nov 20 19:27:23 thoregon kernel: [ 2160.597772]  [<ffffffff811f3ea0>] xfs_droplink+0x50/0x60
Nov 20 19:27:23 thoregon kernel: [ 2160.597778]  [<ffffffff811f5d77>] xfs_remove+0x287/0x370
Nov 20 19:27:23 thoregon kernel: [ 2160.597788]  [<ffffffff816af4ac>] ? mutex_lock_nested+0x26c/0x330
Nov 20 19:27:23 thoregon kernel: [ 2160.597796]  [<ffffffff810ff48f>] ? vfs_unlink+0x4f/0x100
Nov 20 19:27:23 thoregon kernel: [ 2160.597804]  [<ffffffff811ec7f3>] xfs_vn_unlink+0x43/0x90
Nov 20 19:27:23 thoregon kernel: [ 2160.597813]  [<ffffffff810ff4cd>] vfs_unlink+0x8d/0x100
Nov 20 19:27:23 thoregon kernel: [ 2160.597821]  [<ffffffff810ff63a>] do_unlinkat+0xfa/0x1b0
Nov 20 19:27:23 thoregon kernel: [ 2160.597830]  [<ffffffff8108152d>] ? trace_hardirqs_on_caller+0xfd/0x190
Nov 20 19:27:23 thoregon kernel: [ 2160.597837]  [<ffffffff8129ca1e>] ? trace_hardirqs_on_thunk+0x3a/0x3f
Nov 20 19:27:23 thoregon kernel: [ 2160.597846]  [<ffffffff81101bb1>] sys_unlink+0x11/0x20
Nov 20 19:27:23 thoregon kernel: [ 2160.597854]  [<ffffffff816b3252>] system_call_fastpath+0x16/0x1b
Nov 20 19:27:23 thoregon kernel: [ 2160.597859] 6 locks held by x86_64-pc-linux/26875:
Nov 20 19:27:23 thoregon kernel: [ 2160.597862]  #0:  (sb_writers#6){.+.+.+}, at: [<ffffffff811135ef>] mnt_want_write+0x1f/0x50
Nov 20 19:27:23 thoregon kernel: [ 2160.597877]  #1:  (&type->i_mutex_dir_key#3/1){+.+.+.}, at: [<ffffffff810ff5e3>] do_unlinkat+0xa3/0x1b0
Nov 20 19:27:23 thoregon kernel: [ 2160.597894]  #2:  (&sb->s_type->i_mutex_key#7){+.+.+.}, at: [<ffffffff810ff48f>] vfs_unlink+0x4f/0x100
Nov 20 19:27:23 thoregon kernel: [ 2160.597909]  #3:  (sb_internal){.+.+.?}, at: [<ffffffff8122b3e8>] xfs_trans_alloc+0x28/0x50
Nov 20 19:27:23 thoregon kernel: [ 2160.597923]  #4:  (&(&ip->i_lock)->mr_lock/4){+.+...}, at: [<ffffffff811e8414>] xfs_ilock+0x84/0xb0
Nov 20 19:27:23 thoregon kernel: [ 2160.597937]  #5:  (&(&ip->i_lock)->mr_lock/5){+.+...}, at: [<ffffffff811e8414>] xfs_ilock+0x84/0xb0
Nov 20 19:27:23 thoregon kernel: [ 2160.597955] INFO: task x86_64-pc-linux:26897 blocked for more than 120 seconds.
Nov 20 19:27:23 thoregon kernel: [ 2160.597958] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 20 19:27:23 thoregon kernel: [ 2160.597961] x86_64-pc-linux D 0000000000000001     0 26897  26114 0x00000000
Nov 20 19:27:23 thoregon kernel: [ 2160.597969]  ffff880101959858 0000000000000046 ffff88032934ab80 ffff880101959fd8
Nov 20 19:27:23 thoregon kernel: [ 2160.597980]  ffff880101959fd8 ffff880101959fd8 ffff880162a3e580 ffff88032934ab80
Nov 20 19:27:23 thoregon kernel: [ 2160.597990]  ffff880101959928 ffff8801ca545030 7fffffffffffffff ffff88032934ab80
Nov 20 19:27:23 thoregon kernel: [ 2160.598000] Call Trace:
Nov 20 19:27:23 thoregon kernel: [ 2160.598010]  [<ffffffff816b1224>] schedule+0x24/0x60
Nov 20 19:27:23 thoregon kernel: [ 2160.598018]  [<ffffffff816aeddd>] schedule_timeout+0x17d/0x1c0
Nov 20 19:27:23 thoregon kernel: [ 2160.598026]  [<ffffffff816b246b>] ? _raw_spin_unlock_irq+0x2b/0x50
Nov 20 19:27:23 thoregon kernel: [ 2160.598034]  [<ffffffff8108152d>] ? trace_hardirqs_on_caller+0xfd/0x190
Nov 20 19:27:23 thoregon kernel: [ 2160.598043]  [<ffffffff816b057e>] __down_common+0x97/0xe8
Nov 20 19:27:23 thoregon kernel: [ 2160.598052]  [<ffffffff816b062e>] __down+0x18/0x1a
Nov 20 19:27:23 thoregon kernel: [ 2160.598058]  [<ffffffff8106245b>] down+0x3b/0x50
Nov 20 19:27:23 thoregon kernel: [ 2160.598065]  [<ffffffff811e21b5>] xfs_buf_lock+0x25/0x60
Nov 20 19:27:23 thoregon kernel: [ 2160.598072]  [<ffffffff811e22e2>] _xfs_buf_find+0xf2/0x200
Nov 20 19:27:23 thoregon kernel: [ 2160.598079]  [<ffffffff811e241f>] xfs_buf_get_map+0x2f/0x130
Nov 20 19:27:23 thoregon kernel: [ 2160.598087]  [<ffffffff81210417>] ? xfs_dabuf_map.isra.23+0xb7/0x2f0
Nov 20 19:27:23 thoregon kernel: [ 2160.598094]  [<ffffffff810b572e>] ? get_page_from_freelist+0x1de/0x460
Nov 20 19:27:23 thoregon kernel: [ 2160.598102]  [<ffffffff811e2b23>] xfs_buf_read_map+0x13/0x90
Nov 20 19:27:23 thoregon kernel: [ 2160.598109]  [<ffffffff81233e8d>] xfs_trans_read_buf_map+0x1ad/0x2d0
Nov 20 19:27:23 thoregon kernel: [ 2160.598117]  [<ffffffff812116b6>] xfs_da_read_buf+0xb6/0x180
Nov 20 19:27:23 thoregon kernel: [ 2160.598126]  [<ffffffff8121724c>] xfs_dir2_leaf_lookup_int+0x5c/0x2c0
Nov 20 19:27:23 thoregon kernel: [ 2160.598135]  [<ffffffff81217583>] xfs_dir2_leaf_removename+0x33/0x330
Nov 20 19:27:23 thoregon kernel: [ 2160.598144]  [<ffffffff8120777b>] ? xfs_bmap_last_offset+0x8b/0xc0
Nov 20 19:27:23 thoregon kernel: [ 2160.598152]  [<ffffffff81213b94>] xfs_dir_removename+0x134/0x150
Nov 20 19:27:23 thoregon kernel: [ 2160.598160]  [<ffffffff811f5ce6>] xfs_remove+0x1f6/0x370
Nov 20 19:27:23 thoregon kernel: [ 2160.598170]  [<ffffffff816af4ac>] ? mutex_lock_nested+0x26c/0x330
Nov 20 19:27:23 thoregon kernel: [ 2160.598178]  [<ffffffff810ff48f>] ? vfs_unlink+0x4f/0x100
Nov 20 19:27:23 thoregon kernel: [ 2160.598186]  [<ffffffff811ec7f3>] xfs_vn_unlink+0x43/0x90
Nov 20 19:27:23 thoregon kernel: [ 2160.598195]  [<ffffffff810ff4cd>] vfs_unlink+0x8d/0x100
Nov 20 19:27:23 thoregon kernel: [ 2160.598203]  [<ffffffff810ff63a>] do_unlinkat+0xfa/0x1b0
Nov 20 19:27:23 thoregon kernel: [ 2160.598211]  [<ffffffff8108152d>] ? trace_hardirqs_on_caller+0xfd/0x190
Nov 20 19:27:23 thoregon kernel: [ 2160.598219]  [<ffffffff8129ca1e>] ? trace_hardirqs_on_thunk+0x3a/0x3f
Nov 20 19:27:23 thoregon kernel: [ 2160.598228]  [<ffffffff81101bb1>] sys_unlink+0x11/0x20
Nov 20 19:27:23 thoregon kernel: [ 2160.598236]  [<ffffffff816b3252>] system_call_fastpath+0x16/0x1b
Nov 20 19:27:23 thoregon kernel: [ 2160.598241] 6 locks held by x86_64-pc-linux/26897:
Nov 20 19:27:23 thoregon kernel: [ 2160.598244]  #0:  (sb_writers#6){.+.+.+}, at: [<ffffffff811135ef>] mnt_want_write+0x1f/0x50
Nov 20 19:27:23 thoregon kernel: [ 2160.598258]  #1:  (&type->i_mutex_dir_key#3/1){+.+.+.}, at: [<ffffffff810ff5e3>] do_unlinkat+0xa3/0x1b0
Nov 20 19:27:23 thoregon kernel: [ 2160.598275]  #2:  (&sb->s_type->i_mutex_key#7){+.+.+.}, at: [<ffffffff810ff48f>] vfs_unlink+0x4f/0x100
Nov 20 19:27:23 thoregon kernel: [ 2160.598290]  #3:  (sb_internal){.+.+.?}, at: [<ffffffff8122b3e8>] xfs_trans_alloc+0x28/0x50
Nov 20 19:27:23 thoregon kernel: [ 2160.598304]  #4:  (&(&ip->i_lock)->mr_lock/4){+.+...}, at: [<ffffffff811e8414>] xfs_ilock+0x84/0xb0
Nov 20 19:27:23 thoregon kernel: [ 2160.598318]  #5:  (&(&ip->i_lock)->mr_lock){++++--}, at: [<ffffffff811e854c>] xfs_ilock_nowait+0xbc/0x100
Nov 20 19:27:23 thoregon kernel: [ 2160.598334] INFO: task x86_64-pc-linux:26904 blocked for more than 120 seconds.
Nov 20 19:27:23 thoregon kernel: [ 2160.598337] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 20 19:27:23 thoregon kernel: [ 2160.598341] x86_64-pc-linux D 0000000000000001     0 26904  26114 0x00000000
Nov 20 19:27:23 thoregon kernel: [ 2160.598348]  ffff880066bff858 0000000000000046 ffff880329301d00 ffff880066bfffd8
Nov 20 19:27:23 thoregon kernel: [ 2160.598359]  ffff880066bfffd8 ffff880066bfffd8 ffff88013a7ce580 ffff880329301d00
Nov 20 19:27:23 thoregon kernel: [ 2160.598369]  ffff880066bff928 ffff8801c572c830 7fffffffffffffff ffff880329301d00
Nov 20 19:27:23 thoregon kernel: [ 2160.598380] Call Trace:
Nov 20 19:27:23 thoregon kernel: [ 2160.598389]  [<ffffffff816b1224>] schedule+0x24/0x60
Nov 20 19:27:23 thoregon kernel: [ 2160.598398]  [<ffffffff816aeddd>] schedule_timeout+0x17d/0x1c0
Nov 20 19:27:23 thoregon kernel: [ 2160.598405]  [<ffffffff816b246b>] ? _raw_spin_unlock_irq+0x2b/0x50
Nov 20 19:27:23 thoregon kernel: [ 2160.598413]  [<ffffffff8108152d>] ? trace_hardirqs_on_caller+0xfd/0x190
Nov 20 19:27:23 thoregon kernel: [ 2160.598422]  [<ffffffff816b057e>] __down_common+0x97/0xe8
Nov 20 19:27:23 thoregon kernel: [ 2160.598431]  [<ffffffff816b062e>] __down+0x18/0x1a
Nov 20 19:27:23 thoregon kernel: [ 2160.598437]  [<ffffffff8106245b>] down+0x3b/0x50
Nov 20 19:27:23 thoregon kernel: [ 2160.598444]  [<ffffffff811e21b5>] xfs_buf_lock+0x25/0x60
Nov 20 19:27:23 thoregon kernel: [ 2160.598451]  [<ffffffff811e22e2>] _xfs_buf_find+0xf2/0x200
Nov 20 19:27:23 thoregon kernel: [ 2160.598458]  [<ffffffff811e241f>] xfs_buf_get_map+0x2f/0x130
Nov 20 19:27:23 thoregon kernel: [ 2160.598466]  [<ffffffff81210417>] ? xfs_dabuf_map.isra.23+0xb7/0x2f0
Nov 20 19:27:23 thoregon kernel: [ 2160.598474]  [<ffffffff810b572e>] ? get_page_from_freelist+0x1de/0x460
Nov 20 19:27:23 thoregon kernel: [ 2160.598481]  [<ffffffff811e2b23>] xfs_buf_read_map+0x13/0x90
Nov 20 19:27:23 thoregon kernel: [ 2160.598488]  [<ffffffff81233e8d>] xfs_trans_read_buf_map+0x1ad/0x2d0
Nov 20 19:27:23 thoregon kernel: [ 2160.598496]  [<ffffffff812116b6>] xfs_da_read_buf+0xb6/0x180
Nov 20 19:27:23 thoregon kernel: [ 2160.598505]  [<ffffffff8121724c>] xfs_dir2_leaf_lookup_int+0x5c/0x2c0
Nov 20 19:27:23 thoregon kernel: [ 2160.598515]  [<ffffffff81217583>] xfs_dir2_leaf_removename+0x33/0x330
Nov 20 19:27:23 thoregon kernel: [ 2160.598523]  [<ffffffff8120777b>] ? xfs_bmap_last_offset+0x8b/0xc0
Nov 20 19:27:23 thoregon kernel: [ 2160.598532]  [<ffffffff81213b94>] xfs_dir_removename+0x134/0x150
Nov 20 19:27:23 thoregon kernel: [ 2160.598540]  [<ffffffff811f5ce6>] xfs_remove+0x1f6/0x370
Nov 20 19:27:23 thoregon kernel: [ 2160.598549]  [<ffffffff816af4ac>] ? mutex_lock_nested+0x26c/0x330
Nov 20 19:27:23 thoregon kernel: [ 2160.598557]  [<ffffffff810ff48f>] ? vfs_unlink+0x4f/0x100
Nov 20 19:27:23 thoregon kernel: [ 2160.598565]  [<ffffffff811ec7f3>] xfs_vn_unlink+0x43/0x90
Nov 20 19:27:23 thoregon kernel: [ 2160.598574]  [<ffffffff810ff4cd>] vfs_unlink+0x8d/0x100
Nov 20 19:27:23 thoregon kernel: [ 2160.598582]  [<ffffffff810ff63a>] do_unlinkat+0xfa/0x1b0
Nov 20 19:27:23 thoregon kernel: [ 2160.598591]  [<ffffffff8108152d>] ? trace_hardirqs_on_caller+0xfd/0x190
Nov 20 19:27:23 thoregon kernel: [ 2160.598598]  [<ffffffff8129ca1e>] ? trace_hardirqs_on_thunk+0x3a/0x3f
Nov 20 19:27:23 thoregon kernel: [ 2160.598607]  [<ffffffff81101bb1>] sys_unlink+0x11/0x20
Nov 20 19:27:23 thoregon kernel: [ 2160.598615]  [<ffffffff816b3252>] system_call_fastpath+0x16/0x1b
Nov 20 19:27:23 thoregon kernel: [ 2160.598620] 6 locks held by x86_64-pc-linux/26904:
Nov 20 19:27:23 thoregon kernel: [ 2160.598623]  #0:  (sb_writers#6){.+.+.+}, at: [<ffffffff811135ef>] mnt_want_write+0x1f/0x50
Nov 20 19:27:23 thoregon kernel: [ 2160.598637]  #1:  (&type->i_mutex_dir_key#3/1){+.+.+.}, at: [<ffffffff810ff5e3>] do_unlinkat+0xa3/0x1b0
Nov 20 19:27:23 thoregon kernel: [ 2160.598654]  #2:  (&sb->s_type->i_mutex_key#7){+.+.+.}, at: [<ffffffff810ff48f>] vfs_unlink+0x4f/0x100
Nov 20 19:27:23 thoregon kernel: [ 2160.598669]  #3:  (sb_internal){.+.+.?}, at: [<ffffffff8122b3e8>] xfs_trans_alloc+0x28/0x50
Nov 20 19:27:23 thoregon kernel: [ 2160.598683]  #4:  (&(&ip->i_lock)->mr_lock/4){+.+...}, at: [<ffffffff811e8414>] xfs_ilock+0x84/0xb0
Nov 20 19:27:23 thoregon kernel: [ 2160.598697]  #5:  (&(&ip->i_lock)->mr_lock){++++--}, at: [<ffffffff811e854c>] xfs_ilock_nowait+0xbc/0x100
Nov 20 19:27:23 thoregon kernel: [ 2160.598713] INFO: task x86_64-pc-linux:26907 blocked for more than 120 seconds.
Nov 20 19:27:23 thoregon kernel: [ 2160.598716] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 20 19:27:23 thoregon kernel: [ 2160.598720] x86_64-pc-linux D 0000000000000000     0 26907  26114 0x00000000
Nov 20 19:27:23 thoregon kernel: [ 2160.598727]  ffff8800c0eed858 0000000000000046 ffff8802a7be6580 ffff8800c0eedfd8
Nov 20 19:27:23 thoregon kernel: [ 2160.598738]  ffff8800c0eedfd8 ffff8800c0eedfd8 ffff88032989e580 ffff8802a7be6580
Nov 20 19:27:23 thoregon kernel: [ 2160.598748]  ffff8800c0eed928 ffff8801c55c7630 7fffffffffffffff ffff8802a7be6580
Nov 20 19:27:23 thoregon kernel: [ 2160.598758] Call Trace:
Nov 20 19:27:23 thoregon kernel: [ 2160.598768]  [<ffffffff816b1224>] schedule+0x24/0x60
Nov 20 19:27:23 thoregon kernel: [ 2160.598776]  [<ffffffff816aeddd>] schedule_timeout+0x17d/0x1c0
Nov 20 19:27:23 thoregon kernel: [ 2160.598784]  [<ffffffff816b246b>] ? _raw_spin_unlock_irq+0x2b/0x50
Nov 20 19:27:23 thoregon kernel: [ 2160.598792]  [<ffffffff8108152d>] ? trace_hardirqs_on_caller+0xfd/0x190
Nov 20 19:27:23 thoregon kernel: [ 2160.598801]  [<ffffffff816b057e>] __down_common+0x97/0xe8
Nov 20 19:27:23 thoregon kernel: [ 2160.598809]  [<ffffffff816b062e>] __down+0x18/0x1a
Nov 20 19:27:23 thoregon kernel: [ 2160.598816]  [<ffffffff8106245b>] down+0x3b/0x50
Nov 20 19:27:23 thoregon kernel: [ 2160.598823]  [<ffffffff811e21b5>] xfs_buf_lock+0x25/0x60
Nov 20 19:27:23 thoregon kernel: [ 2160.598830]  [<ffffffff811e22e2>] _xfs_buf_find+0xf2/0x200
Nov 20 19:27:23 thoregon kernel: [ 2160.598837]  [<ffffffff811e241f>] xfs_buf_get_map+0x2f/0x130
Nov 20 19:27:23 thoregon kernel: [ 2160.598844]  [<ffffffff81210417>] ? xfs_dabuf_map.isra.23+0xb7/0x2f0
Nov 20 19:27:23 thoregon kernel: [ 2160.598852]  [<ffffffff810b572e>] ? get_page_from_freelist+0x1de/0x460
Nov 20 19:27:23 thoregon kernel: [ 2160.598859]  [<ffffffff811e2b23>] xfs_buf_read_map+0x13/0x90
Nov 20 19:27:23 thoregon kernel: [ 2160.598867]  [<ffffffff81233e8d>] xfs_trans_read_buf_map+0x1ad/0x2d0
Nov 20 19:27:23 thoregon kernel: [ 2160.598874]  [<ffffffff812116b6>] xfs_da_read_buf+0xb6/0x180
Nov 20 19:27:23 thoregon kernel: [ 2160.598884]  [<ffffffff8121724c>] xfs_dir2_leaf_lookup_int+0x5c/0x2c0
Nov 20 19:27:23 thoregon kernel: [ 2160.598893]  [<ffffffff81217583>] xfs_dir2_leaf_removename+0x33/0x330
Nov 20 19:27:23 thoregon kernel: [ 2160.598901]  [<ffffffff8120777b>] ? xfs_bmap_last_offset+0x8b/0xc0
Nov 20 19:27:23 thoregon kernel: [ 2160.598910]  [<ffffffff81213b94>] xfs_dir_removename+0x134/0x150
Nov 20 19:27:23 thoregon kernel: [ 2160.598918]  [<ffffffff811f5ce6>] xfs_remove+0x1f6/0x370
Nov 20 19:27:23 thoregon kernel: [ 2160.598927]  [<ffffffff816af4ac>] ? mutex_lock_nested+0x26c/0x330
Nov 20 19:27:23 thoregon kernel: [ 2160.598935]  [<ffffffff810ff48f>] ? vfs_unlink+0x4f/0x100
Nov 20 19:27:23 thoregon kernel: [ 2160.598944]  [<ffffffff811ec7f3>] xfs_vn_unlink+0x43/0x90
Nov 20 19:27:23 thoregon kernel: [ 2160.598952]  [<ffffffff810ff4cd>] vfs_unlink+0x8d/0x100
Nov 20 19:27:23 thoregon kernel: [ 2160.598961]  [<ffffffff810ff63a>] do_unlinkat+0xfa/0x1b0
Nov 20 19:27:23 thoregon kernel: [ 2160.598969]  [<ffffffff8108152d>] ? trace_hardirqs_on_caller+0xfd/0x190
Nov 20 19:27:23 thoregon kernel: [ 2160.598977]  [<ffffffff8129ca1e>] ? trace_hardirqs_on_thunk+0x3a/0x3f
Nov 20 19:27:23 thoregon kernel: [ 2160.598986]  [<ffffffff81101bb1>] sys_unlink+0x11/0x20
Nov 20 19:27:23 thoregon kernel: [ 2160.598993]  [<ffffffff816b3252>] system_call_fastpath+0x16/0x1b
Nov 20 19:27:23 thoregon kernel: [ 2160.598998] 6 locks held by x86_64-pc-linux/26907:
Nov 20 19:27:23 thoregon kernel: [ 2160.599001]  #0:  (sb_writers#6){.+.+.+}, at: [<ffffffff811135ef>] mnt_want_write+0x1f/0x50
Nov 20 19:27:23 thoregon kernel: [ 2160.599016]  #1:  (&type->i_mutex_dir_key#3/1){+.+.+.}, at: [<ffffffff810ff5e3>] do_unlinkat+0xa3/0x1b0
Nov 20 19:27:23 thoregon kernel: [ 2160.599033]  #2:  (&sb->s_type->i_mutex_key#7){+.+.+.}, at: [<ffffffff810ff48f>] vfs_unlink+0x4f/0x100
Nov 20 19:27:23 thoregon kernel: [ 2160.599048]  #3:  (sb_internal){.+.+.?}, at: [<ffffffff8122b3e8>] xfs_trans_alloc+0x28/0x50
Nov 20 19:27:23 thoregon kernel: [ 2160.599062]  #4:  (&(&ip->i_lock)->mr_lock/4){+.+...}, at: [<ffffffff811e8414>] xfs_ilock+0x84/0xb0
Nov 20 19:27:23 thoregon kernel: [ 2160.599076]  #5:  (&(&ip->i_lock)->mr_lock){++++--}, at: [<ffffffff811e854c>] xfs_ilock_nowait+0xbc/0x100
Nov 20 19:29:23 thoregon kernel: [ 2280.601023] INFO: task kswapd0:725 blocked for more than 120 seconds.
Nov 20 19:29:23 thoregon kernel: [ 2280.601030] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 20 19:29:23 thoregon kernel: [ 2280.601036] kswapd0         D 0000000000000001     0   725      2 0x00000000
Nov 20 19:29:23 thoregon kernel: [ 2280.601048]  ffff8803280d13f8 0000000000000046 ffff880329a0ab80 ffff8803280d1fd8
Nov 20 19:29:23 thoregon kernel: [ 2280.601063]  ffff8803280d1fd8 ffff8803280d1fd8 ffff880046b7c880 ffff880329a0ab80
Nov 20 19:29:23 thoregon kernel: [ 2280.601073]  ffff8803280d1408 ffff8803278dbbd0 ffff8803278db800 00000000ffffffff
Nov 20 19:29:23 thoregon kernel: [ 2280.601084] Call Trace:
Nov 20 19:29:23 thoregon kernel: [ 2280.601101]  [<ffffffff816b1224>] schedule+0x24/0x60
Nov 20 19:29:23 thoregon kernel: [ 2280.601111]  [<ffffffff814f9dad>] md_super_wait+0x4d/0x80
Nov 20 19:29:23 thoregon kernel: [ 2280.601121]  [<ffffffff8105ca30>] ? __init_waitqueue_head+0x60/0x60
Nov 20 19:29:23 thoregon kernel: [ 2280.601129]  [<ffffffff81500753>] bitmap_unplug+0x173/0x180
Nov 20 19:29:23 thoregon kernel: [ 2280.601138]  [<ffffffff8108152d>] ? trace_hardirqs_on_caller+0xfd/0x190
Nov 20 19:29:23 thoregon kernel: [ 2280.601147]  [<ffffffff814e8eb8>] raid1_unplug+0x98/0x110
Nov 20 19:29:23 thoregon kernel: [ 2280.601157]  [<ffffffff81278a6d>] blk_flush_plug_list+0xad/0x240
Nov 20 19:29:23 thoregon kernel: [ 2280.601165]  [<ffffffff816b24ca>] ? _raw_spin_unlock_irqrestore+0x3a/0x70
Nov 20 19:29:23 thoregon kernel: [ 2280.601175]  [<ffffffff816b15c3>] io_schedule_timeout+0x83/0xf0
Nov 20 19:29:23 thoregon kernel: [ 2280.601182]  [<ffffffff810815cd>] ? trace_hardirqs_on+0xd/0x10
Nov 20 19:29:23 thoregon kernel: [ 2280.601192]  [<ffffffff810b0e1d>] mempool_alloc+0x12d/0x160
Nov 20 19:29:23 thoregon kernel: [ 2280.601200]  [<ffffffff8105ca30>] ? __init_waitqueue_head+0x60/0x60
Nov 20 19:29:23 thoregon kernel: [ 2280.601209]  [<ffffffff811263da>] bvec_alloc_bs+0xda/0x100
Nov 20 19:29:23 thoregon kernel: [ 2280.601217]  [<ffffffff811264ea>] bio_alloc_bioset+0xea/0x110
Nov 20 19:29:23 thoregon kernel: [ 2280.601226]  [<ffffffff81126656>] bio_clone_bioset+0x16/0x40
Nov 20 19:29:23 thoregon kernel: [ 2280.601233]  [<ffffffff814f471a>] bio_clone_mddev+0x1a/0x30
Nov 20 19:29:23 thoregon kernel: [ 2280.601241]  [<ffffffff814edbb1>] make_request+0x551/0xde0
Nov 20 19:29:23 thoregon kernel: [ 2280.601249]  [<ffffffff814ed9e0>] ? make_request+0x380/0xde0
Nov 20 19:29:23 thoregon kernel: [ 2280.601257]  [<ffffffff810b0b80>] ? mempool_alloc_slab+0x10/0x20
Nov 20 19:29:23 thoregon kernel: [ 2280.601266]  [<ffffffff814f80bb>] md_make_request+0x21b/0x4d0
Nov 20 19:29:23 thoregon kernel: [ 2280.601273]  [<ffffffff814f7ef0>] ? md_make_request+0x50/0x4d0
Nov 20 19:29:23 thoregon kernel: [ 2280.601282]  [<ffffffff81276e52>] generic_make_request+0xc2/0x100
Nov 20 19:29:23 thoregon kernel: [ 2280.601291]  [<ffffffff81276ef5>] submit_bio+0x65/0x110
Nov 20 19:29:23 thoregon kernel: [ 2280.601301]  [<ffffffff811e07bf>] xfs_submit_ioend_bio.isra.21+0x2f/0x40
Nov 20 19:29:23 thoregon kernel: [ 2280.601309]  [<ffffffff811e088e>] xfs_submit_ioend+0xbe/0x110
Nov 20 19:29:23 thoregon kernel: [ 2280.601318]  [<ffffffff811e0c91>] xfs_vm_writepage+0x3b1/0x540
Nov 20 19:29:23 thoregon kernel: [ 2280.601328]  [<ffffffff810bcd84>] shrink_page_list+0x564/0x890
Nov 20 19:29:23 thoregon kernel: [ 2280.601338]  [<ffffffff810bd637>] shrink_inactive_list+0x1d7/0x310
Nov 20 19:29:23 thoregon kernel: [ 2280.601347]  [<ffffffff810bdb9d>] shrink_lruvec+0x42d/0x530
Nov 20 19:29:23 thoregon kernel: [ 2280.601356]  [<ffffffff810be323>] kswapd+0x683/0xa20
Nov 20 19:29:23 thoregon kernel: [ 2280.601365]  [<ffffffff8105ca30>] ? __init_waitqueue_head+0x60/0x60
Nov 20 19:29:23 thoregon kernel: [ 2280.601374]  [<ffffffff810bdca0>] ? shrink_lruvec+0x530/0x530
Nov 20 19:29:23 thoregon kernel: [ 2280.601381]  [<ffffffff8105c246>] kthread+0xd6/0xe0
Nov 20 19:29:23 thoregon kernel: [ 2280.601388]  [<ffffffff816b246b>] ? _raw_spin_unlock_irq+0x2b/0x50
Nov 20 19:29:23 thoregon kernel: [ 2280.601397]  [<ffffffff8105c170>] ? flush_kthread_worker+0xe0/0xe0
Nov 20 19:29:23 thoregon kernel: [ 2280.601404]  [<ffffffff816b31ac>] ret_from_fork+0x7c/0xb0
Nov 20 19:29:23 thoregon kernel: [ 2280.601412]  [<ffffffff8105c170>] ? flush_kthread_worker+0xe0/0xe0
Nov 20 19:29:23 thoregon kernel: [ 2280.601417] no locks held by kswapd0/725.
Nov 20 19:35:16 thoregon kernel: [ 2633.536440] SysRq : Show Blocked State
Nov 20 19:35:16 thoregon kernel: [ 2633.536453]   task                        PC stack   pid father
Nov 20 19:35:16 thoregon kernel: [ 2633.536485] kswapd0         D 0000000000000001     0   725      2 0x00000000
Nov 20 19:35:16 thoregon kernel: [ 2633.536496]  ffff8803280d13f8 0000000000000046 ffff880329a0ab80 ffff8803280d1fd8
Nov 20 19:35:16 thoregon kernel: [ 2633.536507]  ffff8803280d1fd8 ffff8803280d1fd8 ffff880046b7c880 ffff880329a0ab80
Nov 20 19:35:16 thoregon kernel: [ 2633.536517]  ffff8803280d1408 ffff8803278dbbd0 ffff8803278db800 00000000ffffffff
Nov 20 19:35:16 thoregon kernel: [ 2633.536526] Call Trace:
Nov 20 19:35:16 thoregon kernel: [ 2633.536542]  [<ffffffff816b1224>] schedule+0x24/0x60
Nov 20 19:35:16 thoregon kernel: [ 2633.536553]  [<ffffffff814f9dad>] md_super_wait+0x4d/0x80
Nov 20 19:35:16 thoregon kernel: [ 2633.536563]  [<ffffffff8105ca30>] ? __init_waitqueue_head+0x60/0x60
Nov 20 19:35:16 thoregon kernel: [ 2633.536571]  [<ffffffff81500753>] bitmap_unplug+0x173/0x180
Nov 20 19:35:16 thoregon kernel: [ 2633.536581]  [<ffffffff8108152d>] ? trace_hardirqs_on_caller+0xfd/0x190
Nov 20 19:35:16 thoregon kernel: [ 2633.536589]  [<ffffffff814e8eb8>] raid1_unplug+0x98/0x110
Nov 20 19:35:16 thoregon kernel: [ 2633.536599]  [<ffffffff81278a6d>] blk_flush_plug_list+0xad/0x240
Nov 20 19:35:16 thoregon kernel: [ 2633.536607]  [<ffffffff816b24ca>] ? _raw_spin_unlock_irqrestore+0x3a/0x70
Nov 20 19:35:16 thoregon kernel: [ 2633.536616]  [<ffffffff816b15c3>] io_schedule_timeout+0x83/0xf0
Nov 20 19:35:16 thoregon kernel: [ 2633.536623]  [<ffffffff810815cd>] ? trace_hardirqs_on+0xd/0x10
Nov 20 19:35:16 thoregon kernel: [ 2633.536633]  [<ffffffff810b0e1d>] mempool_alloc+0x12d/0x160
Nov 20 19:35:16 thoregon kernel: [ 2633.536641]  [<ffffffff8105ca30>] ? __init_waitqueue_head+0x60/0x60
Nov 20 19:35:16 thoregon kernel: [ 2633.536650]  [<ffffffff811263da>] bvec_alloc_bs+0xda/0x100
Nov 20 19:35:16 thoregon kernel: [ 2633.536658]  [<ffffffff811264ea>] bio_alloc_bioset+0xea/0x110
Nov 20 19:35:16 thoregon kernel: [ 2633.536666]  [<ffffffff81126656>] bio_clone_bioset+0x16/0x40
Nov 20 19:35:16 thoregon kernel: [ 2633.536673]  [<ffffffff814f471a>] bio_clone_mddev+0x1a/0x30
Nov 20 19:35:16 thoregon kernel: [ 2633.536681]  [<ffffffff814edbb1>] make_request+0x551/0xde0
Nov 20 19:35:16 thoregon kernel: [ 2633.536689]  [<ffffffff814ed9e0>] ? make_request+0x380/0xde0
Nov 20 19:35:16 thoregon kernel: [ 2633.536697]  [<ffffffff810b0b80>] ? mempool_alloc_slab+0x10/0x20
Nov 20 19:35:16 thoregon kernel: [ 2633.536706]  [<ffffffff814f80bb>] md_make_request+0x21b/0x4d0
Nov 20 19:35:16 thoregon kernel: [ 2633.536712]  [<ffffffff814f7ef0>] ? md_make_request+0x50/0x4d0
Nov 20 19:35:16 thoregon kernel: [ 2633.536722]  [<ffffffff81276e52>] generic_make_request+0xc2/0x100
Nov 20 19:35:16 thoregon kernel: [ 2633.536730]  [<ffffffff81276ef5>] submit_bio+0x65/0x110
Nov 20 19:35:16 thoregon kernel: [ 2633.536740]  [<ffffffff811e07bf>] xfs_submit_ioend_bio.isra.21+0x2f/0x40
Nov 20 19:35:16 thoregon kernel: [ 2633.536748]  [<ffffffff811e088e>] xfs_submit_ioend+0xbe/0x110
Nov 20 19:35:16 thoregon kernel: [ 2633.536757]  [<ffffffff811e0c91>] xfs_vm_writepage+0x3b1/0x540
Nov 20 19:35:16 thoregon kernel: [ 2633.536767]  [<ffffffff810bcd84>] shrink_page_list+0x564/0x890
Nov 20 19:35:16 thoregon kernel: [ 2633.536777]  [<ffffffff810bd637>] shrink_inactive_list+0x1d7/0x310
Nov 20 19:35:16 thoregon kernel: [ 2633.536785]  [<ffffffff810bdb9d>] shrink_lruvec+0x42d/0x530
Nov 20 19:35:16 thoregon kernel: [ 2633.536795]  [<ffffffff810be323>] kswapd+0x683/0xa20
Nov 20 19:35:16 thoregon kernel: [ 2633.536804]  [<ffffffff8105ca30>] ? __init_waitqueue_head+0x60/0x60
Nov 20 19:35:16 thoregon kernel: [ 2633.536812]  [<ffffffff810bdca0>] ? shrink_lruvec+0x530/0x530
Nov 20 19:35:16 thoregon kernel: [ 2633.536819]  [<ffffffff8105c246>] kthread+0xd6/0xe0
Nov 20 19:35:16 thoregon kernel: [ 2633.536826]  [<ffffffff816b246b>] ? _raw_spin_unlock_irq+0x2b/0x50
Nov 20 19:35:16 thoregon kernel: [ 2633.536835]  [<ffffffff8105c170>] ? flush_kthread_worker+0xe0/0xe0
Nov 20 19:35:16 thoregon kernel: [ 2633.536842]  [<ffffffff816b31ac>] ret_from_fork+0x7c/0xb0
Nov 20 19:35:16 thoregon kernel: [ 2633.536849]  [<ffffffff8105c170>] ? flush_kthread_worker+0xe0/0xe0
Nov 20 19:35:16 thoregon kernel: [ 2633.536890] xfsaild/md4     D 0000000000000003     0  1742      2 0x00000000
Nov 20 19:35:16 thoregon kernel: [ 2633.536898]  ffff88032438bb68 0000000000000046 ffff880329965700 ffff88032438bfd8
Nov 20 19:35:16 thoregon kernel: [ 2633.536908]  ffff88032438bfd8 ffff88032438bfd8 ffff88032827e580 ffff880329965700
Nov 20 19:35:16 thoregon kernel: [ 2633.536917]  ffff88032438bb78 ffff8803278dbbd0 ffff8803278db800 00000000ffffffff
Nov 20 19:35:16 thoregon kernel: [ 2633.536926] Call Trace:
Nov 20 19:35:16 thoregon kernel: [ 2633.536935]  [<ffffffff816b1224>] schedule+0x24/0x60
Nov 20 19:35:16 thoregon kernel: [ 2633.536942]  [<ffffffff814f9dad>] md_super_wait+0x4d/0x80
Nov 20 19:35:16 thoregon kernel: [ 2633.536950]  [<ffffffff8105ca30>] ? __init_waitqueue_head+0x60/0x60
Nov 20 19:35:16 thoregon kernel: [ 2633.536957]  [<ffffffff81500753>] bitmap_unplug+0x173/0x180
Nov 20 19:35:16 thoregon kernel: [ 2633.536966]  [<ffffffff81278c13>] ? blk_finish_plug+0x13/0x50
Nov 20 19:35:16 thoregon kernel: [ 2633.536974]  [<ffffffff814e8eb8>] raid1_unplug+0x98/0x110
Nov 20 19:35:16 thoregon kernel: [ 2633.536982]  [<ffffffff81278a6d>] blk_flush_plug_list+0xad/0x240
Nov 20 19:35:16 thoregon kernel: [ 2633.536991]  [<ffffffff81278c13>] blk_finish_plug+0x13/0x50
Nov 20 19:35:16 thoregon kernel: [ 2633.536998]  [<ffffffff811e296a>] __xfs_buf_delwri_submit+0x1ca/0x1e0
Nov 20 19:35:16 thoregon kernel: [ 2633.537006]  [<ffffffff811e2ffb>] xfs_buf_delwri_submit_nowait+0x1b/0x20
Nov 20 19:35:16 thoregon kernel: [ 2633.537014]  [<ffffffff81233066>] xfsaild+0x226/0x4c0
Nov 20 19:35:16 thoregon kernel: [ 2633.537022]  [<ffffffff81065dfa>] ? finish_task_switch+0x3a/0x100
Nov 20 19:35:16 thoregon kernel: [ 2633.537030]  [<ffffffff81232e40>] ? xfs_trans_ail_cursor_first+0xa0/0xa0
Nov 20 19:35:16 thoregon kernel: [ 2633.537037]  [<ffffffff8105c246>] kthread+0xd6/0xe0
Nov 20 19:35:16 thoregon kernel: [ 2633.537044]  [<ffffffff816b246b>] ? _raw_spin_unlock_irq+0x2b/0x50
Nov 20 19:35:16 thoregon kernel: [ 2633.537053]  [<ffffffff8105c170>] ? flush_kthread_worker+0xe0/0xe0
Nov 20 19:35:16 thoregon kernel: [ 2633.537060]  [<ffffffff816b31ac>] ret_from_fork+0x7c/0xb0
Nov 20 19:35:16 thoregon kernel: [ 2633.537067]  [<ffffffff8105c170>] ? flush_kthread_worker+0xe0/0xe0
Nov 20 19:35:16 thoregon kernel: [ 2633.537158] plasma-desktop  D 0000000000000001     0  3195      1 0x00000000
Nov 20 19:35:16 thoregon kernel: [ 2633.537165]  ffff88030b99d448 0000000000000046 ffff8803266c1d00 ffff88030b99dfd8
Nov 20 19:35:16 thoregon kernel: [ 2633.537174]  ffff88030b99dfd8 ffff88030b99dfd8 ffff880046b78e80 ffff8803266c1d00
Nov 20 19:35:16 thoregon kernel: [ 2633.537183]  0000000000000282 ffff88030b99d468 0000000100038f76 0000000100038f74
Nov 20 19:35:16 thoregon kernel: [ 2633.537192] Call Trace:
Nov 20 19:35:16 thoregon kernel: [ 2633.537201]  [<ffffffff816b1224>] schedule+0x24/0x60
Nov 20 19:35:16 thoregon kernel: [ 2633.537210]  [<ffffffff816aed8e>] schedule_timeout+0x12e/0x1c0
Nov 20 19:35:16 thoregon kernel: [ 2633.537219]  [<ffffffff81048730>] ? call_timer_fn+0xf0/0xf0
Nov 20 19:35:16 thoregon kernel: [ 2633.537228]  [<ffffffff816aee39>] schedule_timeout_uninterruptible+0x19/0x20
Nov 20 19:35:16 thoregon kernel: [ 2633.537235]  [<ffffffff811f1f99>] xfs_reclaim_inode+0x129/0x340
Nov 20 19:35:16 thoregon kernel: [ 2633.537242]  [<ffffffff811f2d6f>] xfs_reclaim_inodes_ag+0x2bf/0x4f0
Nov 20 19:35:16 thoregon kernel: [ 2633.537249]  [<ffffffff811f2b90>] ? xfs_reclaim_inodes_ag+0xe0/0x4f0
Nov 20 19:35:16 thoregon kernel: [ 2633.537258]  [<ffffffff811f30de>] xfs_reclaim_inodes_nr+0x2e/0x40
Nov 20 19:35:16 thoregon kernel: [ 2633.537267]  [<ffffffff811ef9b0>] xfs_fs_free_cached_objects+0x10/0x20
Nov 20 19:35:16 thoregon kernel: [ 2633.537274]  [<ffffffff810f5b43>] prune_super+0x113/0x1a0
Nov 20 19:35:16 thoregon kernel: [ 2633.537282]  [<ffffffff810bbd4e>] shrink_slab+0x11e/0x1f0
Nov 20 19:35:16 thoregon kernel: [ 2633.537290]  [<ffffffff810b4fa5>] ? drain_pages+0x95/0xc0
Nov 20 19:35:16 thoregon kernel: [ 2633.537298]  [<ffffffff810be8df>] try_to_free_pages+0x21f/0x4e0
Nov 20 19:35:16 thoregon kernel: [ 2633.537306]  [<ffffffff810b5eb6>] __alloc_pages_nodemask+0x506/0x800
Nov 20 19:35:16 thoregon kernel: [ 2633.537316]  [<ffffffff81570ed3>] ? __alloc_skb+0x83/0x280
Nov 20 19:35:16 thoregon kernel: [ 2633.537323]  [<ffffffff810b6232>] __get_free_pages+0x12/0x40
Nov 20 19:35:16 thoregon kernel: [ 2633.537331]  [<ffffffff810ebd8c>] __kmalloc_track_caller+0x14c/0x170
Nov 20 19:35:16 thoregon kernel: [ 2633.537339]  [<ffffffff815703e6>] __kmalloc_reserve+0x36/0x90
Nov 20 19:35:16 thoregon kernel: [ 2633.537346]  [<ffffffff81570ed3>] __alloc_skb+0x83/0x280
Nov 20 19:35:16 thoregon kernel: [ 2633.537354]  [<ffffffff8105c905>] ? remove_wait_queue+0x55/0x60
Nov 20 19:35:16 thoregon kernel: [ 2633.537361]  [<ffffffff81569fb1>] sock_alloc_send_pskb+0x251/0x3e0
Nov 20 19:35:16 thoregon kernel: [ 2633.537369]  [<ffffffff8156a150>] sock_alloc_send_skb+0x10/0x20
Nov 20 19:35:16 thoregon kernel: [ 2633.537376]  [<ffffffff81615f21>] unix_stream_sendmsg+0x2e1/0x420
Nov 20 19:35:16 thoregon kernel: [ 2633.537385]  [<ffffffff81564900>] ? sock_aio_read+0x30/0x30
Nov 20 19:35:16 thoregon kernel: [ 2633.537393]  [<ffffffff81564a0c>] sock_aio_write+0x10c/0x140
Nov 20 19:35:16 thoregon kernel: [ 2633.537403]  [<ffffffff810f2eeb>] do_sync_readv_writev+0x9b/0xe0
Nov 20 19:35:16 thoregon kernel: [ 2633.537413]  [<ffffffff810f31a8>] do_readv_writev+0xc8/0x1c0
Nov 20 19:35:16 thoregon kernel: [ 2633.537421]  [<ffffffff8108152d>] ? trace_hardirqs_on_caller+0xfd/0x190
Nov 20 19:35:16 thoregon kernel: [ 2633.537428]  [<ffffffff8107c899>] ? __lock_is_held+0x59/0x70
Nov 20 19:35:16 thoregon kernel: [ 2633.537438]  [<ffffffff8110f892>] ? fget_light+0x42/0x4b0
Nov 20 19:35:16 thoregon kernel: [ 2633.537446]  [<ffffffff810f32d7>] vfs_writev+0x37/0x50
Nov 20 19:35:16 thoregon kernel: [ 2633.537454]  [<ffffffff810f341d>] sys_writev+0x4d/0x90
Nov 20 19:35:16 thoregon kernel: [ 2633.537462]  [<ffffffff816b3252>] system_call_fastpath+0x16/0x1b
Nov 20 19:35:16 thoregon kernel: [ 2633.537518] ssh             D 0000000000000000     0  3573      1 0x00000004
Nov 20 19:35:16 thoregon kernel: [ 2633.537525]  ffff8802c9f2f508 0000000000000046 ffff8802f04dd700 ffff8802c9f2ffd8
Nov 20 19:35:16 thoregon kernel: [ 2633.537534]  ffff8802c9f2ffd8 ffff8802c9f2ffd8 ffffffff81c13420 ffff8802f04dd700
Nov 20 19:35:16 thoregon kernel: [ 2633.537543]  0000000000000282 ffff8802c9f2f528 0000000100038f75 0000000100038f73
Nov 20 19:35:16 thoregon kernel: [ 2633.537552] Call Trace:
Nov 20 19:35:16 thoregon kernel: [ 2633.537561]  [<ffffffff816b1224>] schedule+0x24/0x60
Nov 20 19:35:16 thoregon kernel: [ 2633.537569]  [<ffffffff816aed8e>] schedule_timeout+0x12e/0x1c0
Nov 20 19:35:16 thoregon kernel: [ 2633.537578]  [<ffffffff81048730>] ? call_timer_fn+0xf0/0xf0
Nov 20 19:35:16 thoregon kernel: [ 2633.537587]  [<ffffffff816aee39>] schedule_timeout_uninterruptible+0x19/0x20
Nov 20 19:35:16 thoregon kernel: [ 2633.537594]  [<ffffffff811f1f99>] xfs_reclaim_inode+0x129/0x340
Nov 20 19:35:16 thoregon kernel: [ 2633.537601]  [<ffffffff811f2d6f>] xfs_reclaim_inodes_ag+0x2bf/0x4f0
Nov 20 19:35:16 thoregon kernel: [ 2633.537607]  [<ffffffff811f2b90>] ? xfs_reclaim_inodes_ag+0xe0/0x4f0
Nov 20 19:35:16 thoregon kernel: [ 2633.537616]  [<ffffffff811f30de>] xfs_reclaim_inodes_nr+0x2e/0x40
Nov 20 19:35:16 thoregon kernel: [ 2633.537625]  [<ffffffff811ef9b0>] xfs_fs_free_cached_objects+0x10/0x20
Nov 20 19:35:16 thoregon kernel: [ 2633.537631]  [<ffffffff810f5b43>] prune_super+0x113/0x1a0
Nov 20 19:35:16 thoregon kernel: [ 2633.537639]  [<ffffffff810bbd4e>] shrink_slab+0x11e/0x1f0
Nov 20 19:35:16 thoregon kernel: [ 2633.537647]  [<ffffffff810b4fa5>] ? drain_pages+0x95/0xc0
Nov 20 19:35:16 thoregon kernel: [ 2633.537655]  [<ffffffff810be8df>] try_to_free_pages+0x21f/0x4e0
Nov 20 19:35:16 thoregon kernel: [ 2633.537663]  [<ffffffff810b5eb6>] __alloc_pages_nodemask+0x506/0x800
Nov 20 19:35:16 thoregon kernel: [ 2633.537670]  [<ffffffff81104d10>] ? poll_select_set_timeout+0x90/0x90
Nov 20 19:35:16 thoregon kernel: [ 2633.537679]  [<ffffffff81570ed3>] ? __alloc_skb+0x83/0x280
Nov 20 19:35:16 thoregon kernel: [ 2633.537686]  [<ffffffff810b6232>] __get_free_pages+0x12/0x40
Nov 20 19:35:16 thoregon kernel: [ 2633.537693]  [<ffffffff810ebd8c>] __kmalloc_track_caller+0x14c/0x170
Nov 20 19:35:16 thoregon kernel: [ 2633.537701]  [<ffffffff815703e6>] __kmalloc_reserve+0x36/0x90
Nov 20 19:35:16 thoregon kernel: [ 2633.537708]  [<ffffffff81570ed3>] __alloc_skb+0x83/0x280
Nov 20 19:35:16 thoregon kernel: [ 2633.537715]  [<ffffffff81569ac3>] ? release_sock+0x183/0x1e0
Nov 20 19:35:16 thoregon kernel: [ 2633.537722]  [<ffffffff81569fb1>] sock_alloc_send_pskb+0x251/0x3e0
Nov 20 19:35:16 thoregon kernel: [ 2633.537729]  [<ffffffff8156a150>] sock_alloc_send_skb+0x10/0x20
Nov 20 19:35:16 thoregon kernel: [ 2633.537736]  [<ffffffff81615f21>] unix_stream_sendmsg+0x2e1/0x420
Nov 20 19:35:16 thoregon kernel: [ 2633.537745]  [<ffffffff81564a0c>] sock_aio_write+0x10c/0x140
Nov 20 19:35:16 thoregon kernel: [ 2633.537753]  [<ffffffff810cb5ae>] ? might_fault+0x4e/0xa0
Nov 20 19:35:16 thoregon kernel: [ 2633.537760]  [<ffffffff811059aa>] ? core_sys_select+0x47a/0x4d0
Nov 20 19:35:16 thoregon kernel: [ 2633.537769]  [<ffffffff810f238b>] do_sync_write+0x9b/0xe0
Nov 20 19:35:16 thoregon kernel: [ 2633.537778]  [<ffffffff810f2a75>] vfs_write+0x145/0x160
Nov 20 19:35:16 thoregon kernel: [ 2633.537787]  [<ffffffff810f2ccd>] sys_write+0x4d/0x90
Nov 20 19:35:16 thoregon kernel: [ 2633.537794]  [<ffffffff816b3252>] system_call_fastpath+0x16/0x1b
Nov 20 19:35:16 thoregon kernel: [ 2633.537801] bash            D 0000000000000000     0 28378  28334 0x00000000
Nov 20 19:35:16 thoregon kernel: [ 2633.537809]  ffff880299f5d5d8 0000000000000046 ffff880326a83a00 ffff880299f5dfd8
Nov 20 19:35:16 thoregon kernel: [ 2633.537818]  ffff880299f5dfd8 ffff880299f5dfd8 ffff88032989ba00 ffff880326a83a00
Nov 20 19:35:16 thoregon kernel: [ 2633.537827]  ffff880299f5d6a8 ffff880299c50230 7fffffffffffffff ffff880326a83a00
Nov 20 19:35:16 thoregon kernel: [ 2633.537836] Call Trace:
Nov 20 19:35:16 thoregon kernel: [ 2633.537845]  [<ffffffff816b1224>] schedule+0x24/0x60
Nov 20 19:35:16 thoregon kernel: [ 2633.537853]  [<ffffffff816aeddd>] schedule_timeout+0x17d/0x1c0
Nov 20 19:35:16 thoregon kernel: [ 2633.537860]  [<ffffffff816b246b>] ? _raw_spin_unlock_irq+0x2b/0x50
Nov 20 19:35:16 thoregon kernel: [ 2633.537868]  [<ffffffff8108152d>] ? trace_hardirqs_on_caller+0xfd/0x190
Nov 20 19:35:16 thoregon kernel: [ 2633.537876]  [<ffffffff816b057e>] __down_common+0x97/0xe8
Nov 20 19:35:16 thoregon kernel: [ 2633.537885]  [<ffffffff816b062e>] __down+0x18/0x1a
Nov 20 19:35:16 thoregon kernel: [ 2633.537891]  [<ffffffff8106245b>] down+0x3b/0x50
Nov 20 19:35:16 thoregon kernel: [ 2633.537898]  [<ffffffff811e21b5>] xfs_buf_lock+0x25/0x60
Nov 20 19:35:16 thoregon kernel: [ 2633.537904]  [<ffffffff811e22e2>] _xfs_buf_find+0xf2/0x200
Nov 20 19:35:16 thoregon kernel: [ 2633.537911]  [<ffffffff811e241f>] xfs_buf_get_map+0x2f/0x130
Nov 20 19:35:16 thoregon kernel: [ 2633.537918]  [<ffffffff811e2b23>] xfs_buf_read_map+0x13/0x90
Nov 20 19:35:16 thoregon kernel: [ 2633.537925]  [<ffffffff81233e8d>] xfs_trans_read_buf_map+0x1ad/0x2d0
Nov 20 19:35:16 thoregon kernel: [ 2633.537933]  [<ffffffff811f73fb>] xfs_alloc_read_agfl+0x6b/0x90
Nov 20 19:35:16 thoregon kernel: [ 2633.537941]  [<ffffffff811f986d>] ? xfs_alloc_read_agf+0xcd/0xf0
Nov 20 19:35:16 thoregon kernel: [ 2633.537948]  [<ffffffff810e9a93>] ? kmem_cache_free+0x83/0x150
Nov 20 19:35:16 thoregon kernel: [ 2633.537955]  [<ffffffff811f9ac1>] xfs_alloc_fix_freelist+0x231/0x480
Nov 20 19:35:16 thoregon kernel: [ 2633.537962]  [<ffffffff8107c899>] ? __lock_is_held+0x59/0x70
Nov 20 19:35:16 thoregon kernel: [ 2633.537971]  [<ffffffff811fa279>] xfs_free_extent+0x99/0x120
Nov 20 19:35:16 thoregon kernel: [ 2633.537978]  [<ffffffff811f6fff>] ? kmem_zone_alloc+0x5f/0xe0
Nov 20 19:35:16 thoregon kernel: [ 2633.537987]  [<ffffffff81206edc>] xfs_bmap_finish+0x15c/0x1a0
Nov 20 19:35:16 thoregon kernel: [ 2633.537996]  [<ffffffff8121f9eb>] xfs_itruncate_extents+0xdb/0x200
Nov 20 19:35:16 thoregon kernel: [ 2633.538006]  [<ffffffff811ed7d5>] xfs_setattr_size+0x355/0x3b0
Nov 20 19:35:16 thoregon kernel: [ 2633.538014]  [<ffffffff811ed85e>] xfs_vn_setattr+0x2e/0x40
Nov 20 19:35:16 thoregon kernel: [ 2633.538022]  [<ffffffff8110dfa6>] notify_change+0x186/0x3a0
Nov 20 19:35:16 thoregon kernel: [ 2633.538030]  [<ffffffff810f1229>] do_truncate+0x59/0x90
Nov 20 19:35:16 thoregon kernel: [ 2633.538039]  [<ffffffff810fd777>] ? __inode_permission+0x27/0x80
Nov 20 19:35:16 thoregon kernel: [ 2633.538048]  [<ffffffff811008da>] do_last.isra.69+0x60a/0xc80
Nov 20 19:35:16 thoregon kernel: [ 2633.538056]  [<ffffffff810fd7e3>] ? inode_permission+0x13/0x50
Nov 20 19:35:16 thoregon kernel: [ 2633.538064]  [<ffffffff810fd882>] ? link_path_walk+0x62/0x910
Nov 20 19:35:16 thoregon kernel: [ 2633.538073]  [<ffffffff81100ffb>] path_openat.isra.70+0xab/0x490
Nov 20 19:35:16 thoregon kernel: [ 2633.538080]  [<ffffffff8107c899>] ? __lock_is_held+0x59/0x70
Nov 20 19:35:16 thoregon kernel: [ 2633.538089]  [<ffffffff8110179d>] do_filp_open+0x3d/0xa0
Nov 20 19:35:16 thoregon kernel: [ 2633.538097]  [<ffffffff8111082b>] ? __alloc_fd+0x16b/0x210
Nov 20 19:35:16 thoregon kernel: [ 2633.538106]  [<ffffffff810f2089>] do_sys_open+0xf9/0x1e0
Nov 20 19:35:16 thoregon kernel: [ 2633.538114]  [<ffffffff810f218c>] sys_open+0x1c/0x20
Nov 20 19:35:16 thoregon kernel: [ 2633.538121]  [<ffffffff816b3252>] system_call_fastpath+0x16/0x1b
Nov 20 19:35:16 thoregon kernel: [ 2633.538126] flush-9:4       D 0000000000000001     0 30365      2 0x00000000
Nov 20 19:35:16 thoregon kernel: [ 2633.538134]  ffff8801d8ce1978 0000000000000046 ffff880328238e80 ffff8801d8ce1fd8
Nov 20 19:35:16 thoregon kernel: [ 2633.538143]  ffff8801d8ce1fd8 ffff8801d8ce1fd8 ffff8803032cc880 ffff880328238e80
Nov 20 19:35:16 thoregon kernel: [ 2633.538151]  ffff8801d8ce1988 ffff8803278dbbd0 ffff8803278db800 00000000ffffffff
Nov 20 19:35:16 thoregon kernel: [ 2633.538160] Call Trace:
Nov 20 19:35:16 thoregon kernel: [ 2633.538169]  [<ffffffff816b1224>] schedule+0x24/0x60
Nov 20 19:35:16 thoregon kernel: [ 2633.538176]  [<ffffffff814f9dad>] md_super_wait+0x4d/0x80
Nov 20 19:35:16 thoregon kernel: [ 2633.538184]  [<ffffffff8105ca30>] ? __init_waitqueue_head+0x60/0x60
Nov 20 19:35:16 thoregon kernel: [ 2633.538192]  [<ffffffff81500753>] bitmap_unplug+0x173/0x180
Nov 20 19:35:16 thoregon kernel: [ 2633.538199]  [<ffffffff810b6acf>] ? write_cache_pages+0x12f/0x420
Nov 20 19:35:16 thoregon kernel: [ 2633.538206]  [<ffffffff810b6700>] ? set_page_dirty_lock+0x60/0x60
Nov 20 19:35:16 thoregon kernel: [ 2633.538214]  [<ffffffff814e8eb8>] raid1_unplug+0x98/0x110
Nov 20 19:35:16 thoregon kernel: [ 2633.538222]  [<ffffffff81278a6d>] blk_flush_plug_list+0xad/0x240
Nov 20 19:35:16 thoregon kernel: [ 2633.538231]  [<ffffffff81278c13>] blk_finish_plug+0x13/0x50
Nov 20 19:35:16 thoregon kernel: [ 2633.538238]  [<ffffffff810b6e0a>] generic_writepages+0x4a/0x60
Nov 20 19:35:16 thoregon kernel: [ 2633.538247]  [<ffffffff811df898>] xfs_vm_writepages+0x48/0x60
Nov 20 19:35:16 thoregon kernel: [ 2633.538256]  [<ffffffff8111b144>] ? writeback_sb_inodes+0x174/0x410
Nov 20 19:35:16 thoregon kernel: [ 2633.538264]  [<ffffffff810b84be>] do_writepages+0x1e/0x30
Nov 20 19:35:16 thoregon kernel: [ 2633.538272]  [<ffffffff8111a1ee>] __writeback_single_inode+0x3e/0x120
Nov 20 19:35:16 thoregon kernel: [ 2633.538280]  [<ffffffff8111b234>] writeback_sb_inodes+0x264/0x410
Nov 20 19:35:16 thoregon kernel: [ 2633.538289]  [<ffffffff8111b476>] __writeback_inodes_wb+0x96/0xc0
Nov 20 19:35:16 thoregon kernel: [ 2633.538298]  [<ffffffff8111b673>] wb_writeback+0x1d3/0x1e0
Nov 20 19:35:16 thoregon kernel: [ 2633.538305]  [<ffffffff8108152d>] ? trace_hardirqs_on_caller+0xfd/0x190
Nov 20 19:35:16 thoregon kernel: [ 2633.538313]  [<ffffffff8111bb1c>] ? wb_do_writeback+0x7c/0x140
Nov 20 19:35:16 thoregon kernel: [ 2633.538322]  [<ffffffff8111bb40>] wb_do_writeback+0xa0/0x140
Nov 20 19:35:16 thoregon kernel: [ 2633.538331]  [<ffffffff8111bc52>] bdi_writeback_thread+0x72/0x150
Nov 20 19:35:16 thoregon kernel: [ 2633.538339]  [<ffffffff8111bbe0>] ? wb_do_writeback+0x140/0x140
Nov 20 19:35:16 thoregon kernel: [ 2633.538346]  [<ffffffff8105c246>] kthread+0xd6/0xe0
Nov 20 19:35:16 thoregon kernel: [ 2633.538353]  [<ffffffff816b246b>] ? _raw_spin_unlock_irq+0x2b/0x50
Nov 20 19:35:16 thoregon kernel: [ 2633.538361]  [<ffffffff8105c170>] ? flush_kthread_worker+0xe0/0xe0
Nov 20 19:35:16 thoregon kernel: [ 2633.538369]  [<ffffffff816b31ac>] ret_from_fork+0x7c/0xb0
Nov 20 19:35:16 thoregon kernel: [ 2633.538376]  [<ffffffff8105c170>] ? flush_kthread_worker+0xe0/0xe0
Nov 20 19:35:16 thoregon kernel: [ 2633.538384] x86_64-pc-linux D 0000000000000000     0 26516      1 0x00000004
Nov 20 19:35:16 thoregon kernel: [ 2633.538391]  ffff8800468dddf8 0000000000000046 ffff880329303a00 ffff8800468ddfd8
Nov 20 19:35:16 thoregon kernel: [ 2633.538400]  ffff8800468ddfd8 ffff8800468ddfd8 ffff88032989c880 ffff880329303a00
Nov 20 19:35:16 thoregon kernel: [ 2633.538409]  0000000000000246 ffff8800468dc000 ffff8801b1a1bc68 0000000002405150
Nov 20 19:35:16 thoregon kernel: [ 2633.538418] Call Trace:
Nov 20 19:35:16 thoregon kernel: [ 2633.538427]  [<ffffffff816b1224>] schedule+0x24/0x60
Nov 20 19:35:16 thoregon kernel: [ 2633.538436]  [<ffffffff816b1501>] schedule_preempt_disabled+0x21/0x30
Nov 20 19:35:16 thoregon kernel: [ 2633.538444]  [<ffffffff816af3a5>] mutex_lock_nested+0x165/0x330
Nov 20 19:35:16 thoregon kernel: [ 2633.538452]  [<ffffffff810ff5e3>] ? do_unlinkat+0xa3/0x1b0
Nov 20 19:35:16 thoregon kernel: [ 2633.538461]  [<ffffffff810fefae>] ? filename_lookup.isra.57+0x2e/0x80
Nov 20 19:35:16 thoregon kernel: [ 2633.538469]  [<ffffffff810ff5e3>] do_unlinkat+0xa3/0x1b0
Nov 20 19:35:16 thoregon kernel: [ 2633.538477]  [<ffffffff8108152d>] ? trace_hardirqs_on_caller+0xfd/0x190
Nov 20 19:35:16 thoregon kernel: [ 2633.538485]  [<ffffffff8129ca1e>] ? trace_hardirqs_on_thunk+0x3a/0x3f
Nov 20 19:35:16 thoregon kernel: [ 2633.538494]  [<ffffffff81101bb1>] sys_unlink+0x11/0x20
Nov 20 19:35:16 thoregon kernel: [ 2633.538501]  [<ffffffff816b3252>] system_call_fastpath+0x16/0x1b
Nov 20 19:35:16 thoregon kernel: [ 2633.538505] x86_64-pc-linux D 0000000000000000     0 26523      1 0x00000004
Nov 20 19:35:16 thoregon kernel: [ 2633.538512]  ffff8800c0def858 0000000000000046 ffff88032986c880 ffff8800c0deffd8
Nov 20 19:35:16 thoregon kernel: [ 2633.538521]  ffff8800c0deffd8 ffff8800c0deffd8 ffff88032989e580 ffff88032986c880
Nov 20 19:35:16 thoregon kernel: [ 2633.538530]  ffff8800c0def928 ffff8801c57e3a30 7fffffffffffffff ffff88032986c880
Nov 20 19:35:16 thoregon kernel: [ 2633.538539] Call Trace:
Nov 20 19:35:16 thoregon kernel: [ 2633.538548]  [<ffffffff816b1224>] schedule+0x24/0x60
Nov 20 19:35:16 thoregon kernel: [ 2633.538556]  [<ffffffff816aeddd>] schedule_timeout+0x17d/0x1c0
Nov 20 19:35:16 thoregon kernel: [ 2633.538563]  [<ffffffff816b246b>] ? _raw_spin_unlock_irq+0x2b/0x50
Nov 20 19:35:16 thoregon kernel: [ 2633.538571]  [<ffffffff8108152d>] ? trace_hardirqs_on_caller+0xfd/0x190
Nov 20 19:35:16 thoregon kernel: [ 2633.538579]  [<ffffffff816b057e>] __down_common+0x97/0xe8
Nov 20 19:35:16 thoregon kernel: [ 2633.538588]  [<ffffffff816b062e>] __down+0x18/0x1a
Nov 20 19:35:16 thoregon kernel: [ 2633.538594]  [<ffffffff8106245b>] down+0x3b/0x50
Nov 20 19:35:16 thoregon kernel: [ 2633.538601]  [<ffffffff811e21b5>] xfs_buf_lock+0x25/0x60
Nov 20 19:35:16 thoregon kernel: [ 2633.538607]  [<ffffffff811e22e2>] _xfs_buf_find+0xf2/0x200
Nov 20 19:35:16 thoregon kernel: [ 2633.538614]  [<ffffffff811e241f>] xfs_buf_get_map+0x2f/0x130
Nov 20 19:35:16 thoregon kernel: [ 2633.538621]  [<ffffffff81210417>] ? xfs_dabuf_map.isra.23+0xb7/0x2f0
Nov 20 19:35:16 thoregon kernel: [ 2633.538628]  [<ffffffff810b572e>] ? get_page_from_freelist+0x1de/0x460
Nov 20 19:35:16 thoregon kernel: [ 2633.538635]  [<ffffffff811e2b23>] xfs_buf_read_map+0x13/0x90
Nov 20 19:35:16 thoregon kernel: [ 2633.538642]  [<ffffffff81233e8d>] xfs_trans_read_buf_map+0x1ad/0x2d0
Nov 20 19:35:16 thoregon kernel: [ 2633.538650]  [<ffffffff812116b6>] xfs_da_read_buf+0xb6/0x180
Nov 20 19:35:16 thoregon kernel: [ 2633.538659]  [<ffffffff8121724c>] xfs_dir2_leaf_lookup_int+0x5c/0x2c0
Nov 20 19:35:16 thoregon kernel: [ 2633.538668]  [<ffffffff81217583>] xfs_dir2_leaf_removename+0x33/0x330
Nov 20 19:35:16 thoregon kernel: [ 2633.538676]  [<ffffffff8120777b>] ? xfs_bmap_last_offset+0x8b/0xc0
Nov 20 19:35:16 thoregon kernel: [ 2633.538684]  [<ffffffff81213b94>] xfs_dir_removename+0x134/0x150
Nov 20 19:35:16 thoregon kernel: [ 2633.538692]  [<ffffffff811f5ce6>] xfs_remove+0x1f6/0x370
Nov 20 19:35:16 thoregon kernel: [ 2633.538701]  [<ffffffff816af4ac>] ? mutex_lock_nested+0x26c/0x330
Nov 20 19:35:16 thoregon kernel: [ 2633.538709]  [<ffffffff810ff48f>] ? vfs_unlink+0x4f/0x100
Nov 20 19:35:16 thoregon kernel: [ 2633.538717]  [<ffffffff811ec7f3>] xfs_vn_unlink+0x43/0x90
Nov 20 19:35:16 thoregon kernel: [ 2633.538725]  [<ffffffff810ff4cd>] vfs_unlink+0x8d/0x100
Nov 20 19:35:16 thoregon kernel: [ 2633.538733]  [<ffffffff810ff63a>] do_unlinkat+0xfa/0x1b0
Nov 20 19:35:16 thoregon kernel: [ 2633.538741]  [<ffffffff8108152d>] ? trace_hardirqs_on_caller+0xfd/0x190
Nov 20 19:35:16 thoregon kernel: [ 2633.538748]  [<ffffffff8129ca1e>] ? trace_hardirqs_on_thunk+0x3a/0x3f
Nov 20 19:35:16 thoregon kernel: [ 2633.538757]  [<ffffffff81101bb1>] sys_unlink+0x11/0x20
Nov 20 19:35:16 thoregon kernel: [ 2633.538764]  [<ffffffff816b3252>] system_call_fastpath+0x16/0x1b
Nov 20 19:35:16 thoregon kernel: [ 2633.538769] x86_64-pc-linux D 0000000000000000     0 26875      1 0x00000004
Nov 20 19:35:16 thoregon kernel: [ 2633.538776]  ffff8800c0c5ba08 0000000000000046 ffff8802f0594880 ffff8800c0c5bfd8
Nov 20 19:35:16 thoregon kernel: [ 2633.538785]  ffff8800c0c5bfd8 ffff8800c0c5bfd8 ffff88032989c880 ffff8802f0594880
Nov 20 19:35:16 thoregon kernel: [ 2633.538793]  0000000000000250 ffff880205a7aa30 7fffffffffffffff ffff8802f0594880
Nov 20 19:35:16 thoregon kernel: [ 2633.538802] Call Trace:
Nov 20 19:35:16 thoregon kernel: [ 2633.538811]  [<ffffffff816b1224>] schedule+0x24/0x60
Nov 20 19:35:16 thoregon kernel: [ 2633.538819]  [<ffffffff816aeddd>] schedule_timeout+0x17d/0x1c0
Nov 20 19:35:16 thoregon kernel: [ 2633.538826]  [<ffffffff811f70b4>] ? kmem_zone_zalloc+0x34/0x50
Nov 20 19:35:16 thoregon kernel: [ 2633.538833]  [<ffffffff816b246b>] ? _raw_spin_unlock_irq+0x2b/0x50
Nov 20 19:35:16 thoregon kernel: [ 2633.538841]  [<ffffffff8108152d>] ? trace_hardirqs_on_caller+0xfd/0x190
Nov 20 19:35:16 thoregon kernel: [ 2633.538849]  [<ffffffff816b057e>] __down_common+0x97/0xe8
Nov 20 19:35:16 thoregon kernel: [ 2633.538857]  [<ffffffff816b062e>] __down+0x18/0x1a
Nov 20 19:35:16 thoregon kernel: [ 2633.538864]  [<ffffffff8106245b>] down+0x3b/0x50
Nov 20 19:35:16 thoregon kernel: [ 2633.538870]  [<ffffffff811e21b5>] xfs_buf_lock+0x25/0x60
Nov 20 19:35:16 thoregon kernel: [ 2633.538877]  [<ffffffff811e22e2>] _xfs_buf_find+0xf2/0x200
Nov 20 19:35:16 thoregon kernel: [ 2633.538883]  [<ffffffff811e241f>] xfs_buf_get_map+0x2f/0x130
Nov 20 19:35:16 thoregon kernel: [ 2633.538890]  [<ffffffff811e2b23>] xfs_buf_read_map+0x13/0x90
Nov 20 19:35:16 thoregon kernel: [ 2633.538897]  [<ffffffff81233e8d>] xfs_trans_read_buf_map+0x1ad/0x2d0
Nov 20 19:35:16 thoregon kernel: [ 2633.538906]  [<ffffffff8121ce6f>] xfs_read_agi+0x7f/0x110
Nov 20 19:35:16 thoregon kernel: [ 2633.538915]  [<ffffffff8121fb55>] xfs_iunlink+0x45/0x150
Nov 20 19:35:16 thoregon kernel: [ 2633.538922]  [<ffffffff81041fc1>] ? current_fs_time+0x11/0x50
Nov 20 19:35:16 thoregon kernel: [ 2633.538929]  [<ffffffff81234352>] ? xfs_trans_ichgtime+0x22/0xa0
Nov 20 19:35:16 thoregon kernel: [ 2633.538936]  [<ffffffff811f3ea0>] xfs_droplink+0x50/0x60
Nov 20 19:35:16 thoregon kernel: [ 2633.538943]  [<ffffffff811f5d77>] xfs_remove+0x287/0x370
Nov 20 19:35:16 thoregon kernel: [ 2633.538951]  [<ffffffff816af4ac>] ? mutex_lock_nested+0x26c/0x330
Nov 20 19:35:16 thoregon kernel: [ 2633.538959]  [<ffffffff810ff48f>] ? vfs_unlink+0x4f/0x100
Nov 20 19:35:16 thoregon kernel: [ 2633.538967]  [<ffffffff811ec7f3>] xfs_vn_unlink+0x43/0x90
Nov 20 19:35:16 thoregon kernel: [ 2633.538975]  [<ffffffff810ff4cd>] vfs_unlink+0x8d/0x100
Nov 20 19:35:16 thoregon kernel: [ 2633.538983]  [<ffffffff810ff63a>] do_unlinkat+0xfa/0x1b0
Nov 20 19:35:16 thoregon kernel: [ 2633.538991]  [<ffffffff8108152d>] ? trace_hardirqs_on_caller+0xfd/0x190
Nov 20 19:35:16 thoregon kernel: [ 2633.538999]  [<ffffffff8129ca1e>] ? trace_hardirqs_on_thunk+0x3a/0x3f
Nov 20 19:35:16 thoregon kernel: [ 2633.539007]  [<ffffffff81101bb1>] sys_unlink+0x11/0x20
Nov 20 19:35:16 thoregon kernel: [ 2633.539014]  [<ffffffff816b3252>] system_call_fastpath+0x16/0x1b
Nov 20 19:35:16 thoregon kernel: [ 2633.539020] x86_64-pc-linux D 0000000000000001     0 26897      1 0x00000004
Nov 20 19:35:16 thoregon kernel: [ 2633.539027]  ffff880101959858 0000000000000046 ffff88032934ab80 ffff880101959fd8
Nov 20 19:35:16 thoregon kernel: [ 2633.539036]  ffff880101959fd8 ffff880101959fd8 ffff880162a3e580 ffff88032934ab80
Nov 20 19:35:16 thoregon kernel: [ 2633.539045]  ffff880101959928 ffff8801ca545030 7fffffffffffffff ffff88032934ab80
Nov 20 19:35:16 thoregon kernel: [ 2633.539053] Call Trace:
Nov 20 19:35:16 thoregon kernel: [ 2633.539062]  [<ffffffff816b1224>] schedule+0x24/0x60
Nov 20 19:35:16 thoregon kernel: [ 2633.539070]  [<ffffffff816aeddd>] schedule_timeout+0x17d/0x1c0
Nov 20 19:35:16 thoregon kernel: [ 2633.539077]  [<ffffffff816b246b>] ? _raw_spin_unlock_irq+0x2b/0x50
Nov 20 19:35:16 thoregon kernel: [ 2633.539085]  [<ffffffff8108152d>] ? trace_hardirqs_on_caller+0xfd/0x190
Nov 20 19:35:16 thoregon kernel: [ 2633.539094]  [<ffffffff816b057e>] __down_common+0x97/0xe8
Nov 20 19:35:16 thoregon kernel: [ 2633.539102]  [<ffffffff816b062e>] __down+0x18/0x1a
Nov 20 19:35:16 thoregon kernel: [ 2633.539108]  [<ffffffff8106245b>] down+0x3b/0x50
Nov 20 19:35:16 thoregon kernel: [ 2633.539115]  [<ffffffff811e21b5>] xfs_buf_lock+0x25/0x60
Nov 20 19:35:16 thoregon kernel: [ 2633.539121]  [<ffffffff811e22e2>] _xfs_buf_find+0xf2/0x200
Nov 20 19:35:16 thoregon kernel: [ 2633.539128]  [<ffffffff811e241f>] xfs_buf_get_map+0x2f/0x130
Nov 20 19:35:16 thoregon kernel: [ 2633.539135]  [<ffffffff81210417>] ? xfs_dabuf_map.isra.23+0xb7/0x2f0
Nov 20 19:35:16 thoregon kernel: [ 2633.539142]  [<ffffffff810b572e>] ? get_page_from_freelist+0x1de/0x460
Nov 20 19:35:16 thoregon kernel: [ 2633.539149]  [<ffffffff811e2b23>] xfs_buf_read_map+0x13/0x90
Nov 20 19:35:16 thoregon kernel: [ 2633.539156]  [<ffffffff81233e8d>] xfs_trans_read_buf_map+0x1ad/0x2d0
Nov 20 19:35:16 thoregon kernel: [ 2633.539164]  [<ffffffff812116b6>] xfs_da_read_buf+0xb6/0x180
Nov 20 19:35:16 thoregon kernel: [ 2633.539172]  [<ffffffff8121724c>] xfs_dir2_leaf_lookup_int+0x5c/0x2c0
Nov 20 19:35:16 thoregon kernel: [ 2633.539181]  [<ffffffff81217583>] xfs_dir2_leaf_removename+0x33/0x330
Nov 20 19:35:16 thoregon kernel: [ 2633.539189]  [<ffffffff8120777b>] ? xfs_bmap_last_offset+0x8b/0xc0
Nov 20 19:35:16 thoregon kernel: [ 2633.539197]  [<ffffffff81213b94>] xfs_dir_removename+0x134/0x150
Nov 20 19:35:16 thoregon kernel: [ 2633.539205]  [<ffffffff811f5ce6>] xfs_remove+0x1f6/0x370
Nov 20 19:35:16 thoregon kernel: [ 2633.539214]  [<ffffffff816af4ac>] ? mutex_lock_nested+0x26c/0x330
Nov 20 19:35:16 thoregon kernel: [ 2633.539222]  [<ffffffff810ff48f>] ? vfs_unlink+0x4f/0x100
Nov 20 19:35:16 thoregon kernel: [ 2633.539230]  [<ffffffff811ec7f3>] xfs_vn_unlink+0x43/0x90
Nov 20 19:35:16 thoregon kernel: [ 2633.539238]  [<ffffffff810ff4cd>] vfs_unlink+0x8d/0x100
Nov 20 19:35:16 thoregon kernel: [ 2633.539246]  [<ffffffff810ff63a>] do_unlinkat+0xfa/0x1b0
Nov 20 19:35:16 thoregon kernel: [ 2633.539254]  [<ffffffff8108152d>] ? trace_hardirqs_on_caller+0xfd/0x190
Nov 20 19:35:16 thoregon kernel: [ 2633.539261]  [<ffffffff8129ca1e>] ? trace_hardirqs_on_thunk+0x3a/0x3f
Nov 20 19:35:16 thoregon kernel: [ 2633.539270]  [<ffffffff81101bb1>] sys_unlink+0x11/0x20
Nov 20 19:35:16 thoregon kernel: [ 2633.539277]  [<ffffffff816b3252>] system_call_fastpath+0x16/0x1b
Nov 20 19:35:16 thoregon kernel: [ 2633.539282] x86_64-pc-linux D 0000000000000001     0 26904      1 0x00000004
Nov 20 19:35:16 thoregon kernel: [ 2633.539290]  ffff880066bff858 0000000000000046 ffff880329301d00 ffff880066bfffd8
Nov 20 19:35:16 thoregon kernel: [ 2633.539299]  ffff880066bfffd8 ffff880066bfffd8 ffff88013a7ce580 ffff880329301d00
Nov 20 19:35:16 thoregon kernel: [ 2633.539307]  ffff880066bff928 ffff8801c572c830 7fffffffffffffff ffff880329301d00
Nov 20 19:35:16 thoregon kernel: [ 2633.539316] Call Trace:
Nov 20 19:35:16 thoregon kernel: [ 2633.539325]  [<ffffffff816b1224>] schedule+0x24/0x60
Nov 20 19:35:16 thoregon kernel: [ 2633.539333]  [<ffffffff816aeddd>] schedule_timeout+0x17d/0x1c0
Nov 20 19:35:16 thoregon kernel: [ 2633.539340]  [<ffffffff816b246b>] ? _raw_spin_unlock_irq+0x2b/0x50
Nov 20 19:35:16 thoregon kernel: [ 2633.539348]  [<ffffffff8108152d>] ? trace_hardirqs_on_caller+0xfd/0x190
Nov 20 19:35:16 thoregon kernel: [ 2633.539356]  [<ffffffff816b057e>] __down_common+0x97/0xe8
Nov 20 19:35:16 thoregon kernel: [ 2633.539365]  [<ffffffff816b062e>] __down+0x18/0x1a
Nov 20 19:35:16 thoregon kernel: [ 2633.539371]  [<ffffffff8106245b>] down+0x3b/0x50
Nov 20 19:35:16 thoregon kernel: [ 2633.539377]  [<ffffffff811e21b5>] xfs_buf_lock+0x25/0x60
Nov 20 19:35:16 thoregon kernel: [ 2633.539384]  [<ffffffff811e22e2>] _xfs_buf_find+0xf2/0x200
Nov 20 19:35:16 thoregon kernel: [ 2633.539390]  [<ffffffff811e241f>] xfs_buf_get_map+0x2f/0x130
Nov 20 19:35:16 thoregon kernel: [ 2633.539398]  [<ffffffff81210417>] ? xfs_dabuf_map.isra.23+0xb7/0x2f0
Nov 20 19:35:16 thoregon kernel: [ 2633.539405]  [<ffffffff810b572e>] ? get_page_from_freelist+0x1de/0x460
Nov 20 19:35:16 thoregon kernel: [ 2633.539412]  [<ffffffff811e2b23>] xfs_buf_read_map+0x13/0x90
Nov 20 19:35:16 thoregon kernel: [ 2633.539419]  [<ffffffff81233e8d>] xfs_trans_read_buf_map+0x1ad/0x2d0
Nov 20 19:35:16 thoregon kernel: [ 2633.539426]  [<ffffffff812116b6>] xfs_da_read_buf+0xb6/0x180
Nov 20 19:35:16 thoregon kernel: [ 2633.539435]  [<ffffffff8121724c>] xfs_dir2_leaf_lookup_int+0x5c/0x2c0
Nov 20 19:35:16 thoregon kernel: [ 2633.539444]  [<ffffffff81217583>] xfs_dir2_leaf_removename+0x33/0x330
Nov 20 19:35:16 thoregon kernel: [ 2633.539452]  [<ffffffff8120777b>] ? xfs_bmap_last_offset+0x8b/0xc0
Nov 20 19:35:16 thoregon kernel: [ 2633.539460]  [<ffffffff81213b94>] xfs_dir_removename+0x134/0x150
Nov 20 19:35:16 thoregon kernel: [ 2633.539468]  [<ffffffff811f5ce6>] xfs_remove+0x1f6/0x370
Nov 20 19:35:16 thoregon kernel: [ 2633.539477]  [<ffffffff816af4ac>] ? mutex_lock_nested+0x26c/0x330
Nov 20 19:35:16 thoregon kernel: [ 2633.539484]  [<ffffffff810ff48f>] ? vfs_unlink+0x4f/0x100
Nov 20 19:35:16 thoregon kernel: [ 2633.539493]  [<ffffffff811ec7f3>] xfs_vn_unlink+0x43/0x90
Nov 20 19:35:16 thoregon kernel: [ 2633.539501]  [<ffffffff810ff4cd>] vfs_unlink+0x8d/0x100
Nov 20 19:35:16 thoregon kernel: [ 2633.539509]  [<ffffffff810ff63a>] do_unlinkat+0xfa/0x1b0
Nov 20 19:35:16 thoregon kernel: [ 2633.539517]  [<ffffffff8108152d>] ? trace_hardirqs_on_caller+0xfd/0x190
Nov 20 19:35:16 thoregon kernel: [ 2633.539524]  [<ffffffff8129ca1e>] ? trace_hardirqs_on_thunk+0x3a/0x3f
Nov 20 19:35:16 thoregon kernel: [ 2633.539532]  [<ffffffff81101bb1>] sys_unlink+0x11/0x20
Nov 20 19:35:16 thoregon kernel: [ 2633.539540]  [<ffffffff816b3252>] system_call_fastpath+0x16/0x1b
Nov 20 19:35:16 thoregon kernel: [ 2633.539544] x86_64-pc-linux D 0000000000000000     0 26907      1 0x00000004
Nov 20 19:35:16 thoregon kernel: [ 2633.539552]  ffff8800c0eed858 0000000000000046 ffff8802a7be6580 ffff8800c0eedfd8
Nov 20 19:35:16 thoregon kernel: [ 2633.539560]  ffff8800c0eedfd8 ffff8800c0eedfd8 ffff88032989e580 ffff8802a7be6580
Nov 20 19:35:16 thoregon kernel: [ 2633.539569]  ffff8800c0eed928 ffff8801c55c7630 7fffffffffffffff ffff8802a7be6580
Nov 20 19:35:16 thoregon kernel: [ 2633.539578] Call Trace:
Nov 20 19:35:16 thoregon kernel: [ 2633.539587]  [<ffffffff816b1224>] schedule+0x24/0x60
Nov 20 19:35:16 thoregon kernel: [ 2633.539595]  [<ffffffff816aeddd>] schedule_timeout+0x17d/0x1c0
Nov 20 19:35:16 thoregon kernel: [ 2633.539602]  [<ffffffff816b246b>] ? _raw_spin_unlock_irq+0x2b/0x50
Nov 20 19:35:16 thoregon kernel: [ 2633.539610]  [<ffffffff8108152d>] ? trace_hardirqs_on_caller+0xfd/0x190
Nov 20 19:35:16 thoregon kernel: [ 2633.539618]  [<ffffffff816b057e>] __down_common+0x97/0xe8
Nov 20 19:35:16 thoregon kernel: [ 2633.539627]  [<ffffffff816b062e>] __down+0x18/0x1a
Nov 20 19:35:16 thoregon kernel: [ 2633.539633]  [<ffffffff8106245b>] down+0x3b/0x50
Nov 20 19:35:16 thoregon kernel: [ 2633.539639]  [<ffffffff811e21b5>] xfs_buf_lock+0x25/0x60
Nov 20 19:35:16 thoregon kernel: [ 2633.539646]  [<ffffffff811e22e2>] _xfs_buf_find+0xf2/0x200
Nov 20 19:35:16 thoregon kernel: [ 2633.539652]  [<ffffffff811e241f>] xfs_buf_get_map+0x2f/0x130
Nov 20 19:35:16 thoregon kernel: [ 2633.539660]  [<ffffffff81210417>] ? xfs_dabuf_map.isra.23+0xb7/0x2f0
Nov 20 19:35:16 thoregon kernel: [ 2633.539667]  [<ffffffff810b572e>] ? get_page_from_freelist+0x1de/0x460
Nov 20 19:35:16 thoregon kernel: [ 2633.539674]  [<ffffffff811e2b23>] xfs_buf_read_map+0x13/0x90
Nov 20 19:35:16 thoregon kernel: [ 2633.539681]  [<ffffffff81233e8d>] xfs_trans_read_buf_map+0x1ad/0x2d0
Nov 20 19:35:16 thoregon kernel: [ 2633.539688]  [<ffffffff812116b6>] xfs_da_read_buf+0xb6/0x180
Nov 20 19:35:16 thoregon kernel: [ 2633.539697]  [<ffffffff8121724c>] xfs_dir2_leaf_lookup_int+0x5c/0x2c0
Nov 20 19:35:16 thoregon kernel: [ 2633.539706]  [<ffffffff81217583>] xfs_dir2_leaf_removename+0x33/0x330
Nov 20 19:35:16 thoregon kernel: [ 2633.539714]  [<ffffffff8120777b>] ? xfs_bmap_last_offset+0x8b/0xc0
Nov 20 19:35:16 thoregon kernel: [ 2633.539722]  [<ffffffff81213b94>] xfs_dir_removename+0x134/0x150
Nov 20 19:35:16 thoregon kernel: [ 2633.539730]  [<ffffffff811f5ce6>] xfs_remove+0x1f6/0x370
Nov 20 19:35:16 thoregon kernel: [ 2633.539738]  [<ffffffff816af4ac>] ? mutex_lock_nested+0x26c/0x330
Nov 20 19:35:16 thoregon kernel: [ 2633.539746]  [<ffffffff810ff48f>] ? vfs_unlink+0x4f/0x100
Nov 20 19:35:16 thoregon kernel: [ 2633.539754]  [<ffffffff811ec7f3>] xfs_vn_unlink+0x43/0x90
Nov 20 19:35:16 thoregon kernel: [ 2633.539763]  [<ffffffff810ff4cd>] vfs_unlink+0x8d/0x100
Nov 20 19:35:16 thoregon kernel: [ 2633.539771]  [<ffffffff810ff63a>] do_unlinkat+0xfa/0x1b0
Nov 20 19:35:16 thoregon kernel: [ 2633.539779]  [<ffffffff8108152d>] ? trace_hardirqs_on_caller+0xfd/0x190
Nov 20 19:35:16 thoregon kernel: [ 2633.539786]  [<ffffffff8129ca1e>] ? trace_hardirqs_on_thunk+0x3a/0x3f
Nov 20 19:35:16 thoregon kernel: [ 2633.539794]  [<ffffffff81101bb1>] sys_unlink+0x11/0x20
Nov 20 19:35:16 thoregon kernel: [ 2633.539802]  [<ffffffff816b3252>] system_call_fastpath+0x16/0x1b


* Re: Hang in XFS reclaim on 3.7.0-rc3
@ 2012-11-20 19:45                 ` Torsten Kaiser
  0 siblings, 0 replies; 31+ messages in thread
From: Torsten Kaiser @ 2012-11-20 19:45 UTC (permalink / raw)
  To: Dave Chinner; +Cc: Linux Kernel, xfs

[-- Attachment #1: Type: text/plain, Size: 2768 bytes --]

On Tue, Nov 20, 2012 at 12:53 AM, Dave Chinner <david@fromorbit.com> wrote:
>        [<ffffffff8108137e>] mark_held_locks+0x7e/0x130
>        [<ffffffff81081a63>] lockdep_trace_alloc+0x63/0xc0
>        [<ffffffff810e9dd5>] kmem_cache_alloc+0x35/0xe0
>        [<ffffffff810dba31>] vm_map_ram+0x271/0x770
>        [<ffffffff811e1316>] _xfs_buf_map_pages+0x46/0xe0
>        [<ffffffff811e222a>] xfs_buf_get_map+0x8a/0x130
>        [<ffffffff81233ab9>] xfs_trans_get_buf_map+0xa9/0xd0
>        [<ffffffff8121bced>] xfs_ialloc_inode_init+0xcd/0x1d0
>
> We shouldn't be mapping buffers there; there's a patch below to fix
> this. It's probably the source of this report, even though I cannot
> be sure - lockdep seems to be off with the fairies...

That patch seems to break my system.
After it started to swap, because I was compiling seamonkey (firefox
turned into the full navigator suite) on a tmpfs, several processes
got stuck and triggered the hung-task check.
As kswapd, xfsaild/md4 and flush-9:4 also got stuck, not even a
shutdown worked.

The attached log first contains the hung-task-notices, then the output
from SysRq+W.

After the shutdown got stuck trying to turn off swap, I first tried
SysRq+S, but did not get a 'Done', and on SysRq+U lockdep complained
about a lock imbalance wrt. sb_writer. SysRq+O also no longer worked,
only SysRq+B.

I don't know which one got stuck first, but I'm somewhat suspicious
of the plasma-desktop and sshd processes that SysRq+W reported stuck
in XFS reclaim, even though those processes never triggered the
hung-task check.

Torsten

> Cheers,
>
> Dave.
> --
> Dave Chinner
> david@fromorbit.com
>
> xfs: inode allocation should use unmapped buffers.
>
> From: Dave Chinner <dchinner@redhat.com>
>
> Inode buffers do not need to be mapped as inodes are read or written
> directly from/to the pages underlying the buffer. This fixes a
> regression introduced by commit 611c994 ("xfs: make XBF_MAPPED the
> default behaviour").
>
> Signed-off-by: Dave Chinner <dchinner@redhat.com>
> ---
>  fs/xfs/xfs_ialloc.c |    3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/fs/xfs/xfs_ialloc.c b/fs/xfs/xfs_ialloc.c
> index 2d6495e..a815412 100644
> --- a/fs/xfs/xfs_ialloc.c
> +++ b/fs/xfs/xfs_ialloc.c
> @@ -200,7 +200,8 @@ xfs_ialloc_inode_init(
>                  */
>                 d = XFS_AGB_TO_DADDR(mp, agno, agbno + (j * blks_per_cluster));
>                 fbuf = xfs_trans_get_buf(tp, mp->m_ddev_targp, d,
> -                                        mp->m_bsize * blks_per_cluster, 0);
> +                                        mp->m_bsize * blks_per_cluster,
> +                                        XBF_UNMAPPED);
>                 if (!fbuf)
>                         return ENOMEM;
>                 /*

[-- Attachment #2: xfs-reclaim-hang-messages.txt --]
[-- Type: text/plain, Size: 79805 bytes --]

Nov 20 19:24:55 thoregon acpid: client 2991[0:0] has disconnected
Nov 20 19:25:07 thoregon login[3002]: pam_unix(login:session): session opened for user root by LOGIN(uid=0)
Nov 20 19:25:07 thoregon login[27176]: ROOT LOGIN  on '/dev/tty1'
Nov 20 19:27:23 thoregon kernel: [ 2160.595990] INFO: task kswapd0:725 blocked for more than 120 seconds.
Nov 20 19:27:23 thoregon kernel: [ 2160.595998] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 20 19:27:23 thoregon kernel: [ 2160.596004] kswapd0         D 0000000000000001     0   725      2 0x00000000
Nov 20 19:27:23 thoregon kernel: [ 2160.596016]  ffff8803280d13f8 0000000000000046 ffff880329a0ab80 ffff8803280d1fd8
Nov 20 19:27:23 thoregon kernel: [ 2160.596030]  ffff8803280d1fd8 ffff8803280d1fd8 ffff880046b7c880 ffff880329a0ab80
Nov 20 19:27:23 thoregon kernel: [ 2160.596041]  ffff8803280d1408 ffff8803278dbbd0 ffff8803278db800 00000000ffffffff
Nov 20 19:27:23 thoregon kernel: [ 2160.596052] Call Trace:
Nov 20 19:27:23 thoregon kernel: [ 2160.596070]  [<ffffffff816b1224>] schedule+0x24/0x60
Nov 20 19:27:23 thoregon kernel: [ 2160.596080]  [<ffffffff814f9dad>] md_super_wait+0x4d/0x80
Nov 20 19:27:23 thoregon kernel: [ 2160.596091]  [<ffffffff8105ca30>] ? __init_waitqueue_head+0x60/0x60
Nov 20 19:27:23 thoregon kernel: [ 2160.596100]  [<ffffffff81500753>] bitmap_unplug+0x173/0x180
Nov 20 19:27:23 thoregon kernel: [ 2160.596109]  [<ffffffff8108152d>] ? trace_hardirqs_on_caller+0xfd/0x190
Nov 20 19:27:23 thoregon kernel: [ 2160.596118]  [<ffffffff814e8eb8>] raid1_unplug+0x98/0x110
Nov 20 19:27:23 thoregon kernel: [ 2160.596130]  [<ffffffff81278a6d>] blk_flush_plug_list+0xad/0x240
Nov 20 19:27:23 thoregon kernel: [ 2160.596138]  [<ffffffff816b24ca>] ? _raw_spin_unlock_irqrestore+0x3a/0x70
Nov 20 19:27:23 thoregon kernel: [ 2160.596148]  [<ffffffff816b15c3>] io_schedule_timeout+0x83/0xf0
Nov 20 19:27:23 thoregon kernel: [ 2160.596156]  [<ffffffff810815cd>] ? trace_hardirqs_on+0xd/0x10
Nov 20 19:27:23 thoregon kernel: [ 2160.596166]  [<ffffffff810b0e1d>] mempool_alloc+0x12d/0x160
Nov 20 19:27:23 thoregon kernel: [ 2160.596174]  [<ffffffff8105ca30>] ? __init_waitqueue_head+0x60/0x60
Nov 20 19:27:23 thoregon kernel: [ 2160.596184]  [<ffffffff811263da>] bvec_alloc_bs+0xda/0x100
Nov 20 19:27:23 thoregon kernel: [ 2160.596192]  [<ffffffff811264ea>] bio_alloc_bioset+0xea/0x110
Nov 20 19:27:23 thoregon kernel: [ 2160.596201]  [<ffffffff81126656>] bio_clone_bioset+0x16/0x40
Nov 20 19:27:23 thoregon kernel: [ 2160.596208]  [<ffffffff814f471a>] bio_clone_mddev+0x1a/0x30
Nov 20 19:27:23 thoregon kernel: [ 2160.596217]  [<ffffffff814edbb1>] make_request+0x551/0xde0
Nov 20 19:27:23 thoregon kernel: [ 2160.596224]  [<ffffffff814ed9e0>] ? make_request+0x380/0xde0
Nov 20 19:27:23 thoregon kernel: [ 2160.596233]  [<ffffffff810b0b80>] ? mempool_alloc_slab+0x10/0x20
Nov 20 19:27:23 thoregon kernel: [ 2160.596242]  [<ffffffff814f80bb>] md_make_request+0x21b/0x4d0
Nov 20 19:27:23 thoregon kernel: [ 2160.596249]  [<ffffffff814f7ef0>] ? md_make_request+0x50/0x4d0
Nov 20 19:27:23 thoregon kernel: [ 2160.596258]  [<ffffffff81276e52>] generic_make_request+0xc2/0x100
Nov 20 19:27:23 thoregon kernel: [ 2160.596267]  [<ffffffff81276ef5>] submit_bio+0x65/0x110
Nov 20 19:27:23 thoregon kernel: [ 2160.596278]  [<ffffffff811e07bf>] xfs_submit_ioend_bio.isra.21+0x2f/0x40
Nov 20 19:27:23 thoregon kernel: [ 2160.596286]  [<ffffffff811e088e>] xfs_submit_ioend+0xbe/0x110
Nov 20 19:27:23 thoregon kernel: [ 2160.596295]  [<ffffffff811e0c91>] xfs_vm_writepage+0x3b1/0x540
Nov 20 19:27:23 thoregon kernel: [ 2160.596305]  [<ffffffff810bcd84>] shrink_page_list+0x564/0x890
Nov 20 19:27:23 thoregon kernel: [ 2160.596315]  [<ffffffff810bd637>] shrink_inactive_list+0x1d7/0x310
Nov 20 19:27:23 thoregon kernel: [ 2160.596325]  [<ffffffff810bdb9d>] shrink_lruvec+0x42d/0x530
Nov 20 19:27:23 thoregon kernel: [ 2160.596334]  [<ffffffff810be323>] kswapd+0x683/0xa20
Nov 20 19:27:23 thoregon kernel: [ 2160.596344]  [<ffffffff8105ca30>] ? __init_waitqueue_head+0x60/0x60
Nov 20 19:27:23 thoregon kernel: [ 2160.596352]  [<ffffffff810bdca0>] ? shrink_lruvec+0x530/0x530
Nov 20 19:27:23 thoregon kernel: [ 2160.596360]  [<ffffffff8105c246>] kthread+0xd6/0xe0
Nov 20 19:27:23 thoregon kernel: [ 2160.596367]  [<ffffffff816b246b>] ? _raw_spin_unlock_irq+0x2b/0x50
Nov 20 19:27:23 thoregon kernel: [ 2160.596376]  [<ffffffff8105c170>] ? flush_kthread_worker+0xe0/0xe0
Nov 20 19:27:23 thoregon kernel: [ 2160.596384]  [<ffffffff816b31ac>] ret_from_fork+0x7c/0xb0
Nov 20 19:27:23 thoregon kernel: [ 2160.596392]  [<ffffffff8105c170>] ? flush_kthread_worker+0xe0/0xe0
Nov 20 19:27:23 thoregon kernel: [ 2160.596397] no locks held by kswapd0/725.
Nov 20 19:27:23 thoregon kernel: [ 2160.596424] INFO: task xfsaild/md4:1742 blocked for more than 120 seconds.
Nov 20 19:27:23 thoregon kernel: [ 2160.596428] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 20 19:27:23 thoregon kernel: [ 2160.596432] xfsaild/md4     D 0000000000000003     0  1742      2 0x00000000
Nov 20 19:27:23 thoregon kernel: [ 2160.596441]  ffff88032438bb68 0000000000000046 ffff880329965700 ffff88032438bfd8
Nov 20 19:27:23 thoregon kernel: [ 2160.596452]  ffff88032438bfd8 ffff88032438bfd8 ffff88032827e580 ffff880329965700
Nov 20 19:27:23 thoregon kernel: [ 2160.596462]  ffff88032438bb78 ffff8803278dbbd0 ffff8803278db800 00000000ffffffff
Nov 20 19:27:23 thoregon kernel: [ 2160.596473] Call Trace:
Nov 20 19:27:23 thoregon kernel: [ 2160.596483]  [<ffffffff816b1224>] schedule+0x24/0x60
Nov 20 19:27:23 thoregon kernel: [ 2160.596490]  [<ffffffff814f9dad>] md_super_wait+0x4d/0x80
Nov 20 19:27:23 thoregon kernel: [ 2160.596498]  [<ffffffff8105ca30>] ? __init_waitqueue_head+0x60/0x60
Nov 20 19:27:23 thoregon kernel: [ 2160.596507]  [<ffffffff81500753>] bitmap_unplug+0x173/0x180
Nov 20 19:27:23 thoregon kernel: [ 2160.596516]  [<ffffffff81278c13>] ? blk_finish_plug+0x13/0x50
Nov 20 19:27:23 thoregon kernel: [ 2160.596524]  [<ffffffff814e8eb8>] raid1_unplug+0x98/0x110
Nov 20 19:27:23 thoregon kernel: [ 2160.596533]  [<ffffffff81278a6d>] blk_flush_plug_list+0xad/0x240
Nov 20 19:27:23 thoregon kernel: [ 2160.596542]  [<ffffffff81278c13>] blk_finish_plug+0x13/0x50
Nov 20 19:27:23 thoregon kernel: [ 2160.596550]  [<ffffffff811e296a>] __xfs_buf_delwri_submit+0x1ca/0x1e0
Nov 20 19:27:23 thoregon kernel: [ 2160.596558]  [<ffffffff811e2ffb>] xfs_buf_delwri_submit_nowait+0x1b/0x20
Nov 20 19:27:23 thoregon kernel: [ 2160.596566]  [<ffffffff81233066>] xfsaild+0x226/0x4c0
Nov 20 19:27:23 thoregon kernel: [ 2160.596575]  [<ffffffff81065dfa>] ? finish_task_switch+0x3a/0x100
Nov 20 19:27:23 thoregon kernel: [ 2160.596583]  [<ffffffff81232e40>] ? xfs_trans_ail_cursor_first+0xa0/0xa0
Nov 20 19:27:23 thoregon kernel: [ 2160.596591]  [<ffffffff8105c246>] kthread+0xd6/0xe0
Nov 20 19:27:23 thoregon kernel: [ 2160.596598]  [<ffffffff816b246b>] ? _raw_spin_unlock_irq+0x2b/0x50
Nov 20 19:27:23 thoregon kernel: [ 2160.596607]  [<ffffffff8105c170>] ? flush_kthread_worker+0xe0/0xe0
Nov 20 19:27:23 thoregon kernel: [ 2160.596614]  [<ffffffff816b31ac>] ret_from_fork+0x7c/0xb0
Nov 20 19:27:23 thoregon kernel: [ 2160.596622]  [<ffffffff8105c170>] ? flush_kthread_worker+0xe0/0xe0
Nov 20 19:27:23 thoregon kernel: [ 2160.596627] no locks held by xfsaild/md4/1742.
Nov 20 19:27:23 thoregon kernel: [ 2160.596709] INFO: task flush-9:4:30365 blocked for more than 120 seconds.
Nov 20 19:27:23 thoregon kernel: [ 2160.596712] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 20 19:27:23 thoregon kernel: [ 2160.596716] flush-9:4       D 0000000000000001     0 30365      2 0x00000000
Nov 20 19:27:23 thoregon kernel: [ 2160.596725]  ffff8801d8ce1978 0000000000000046 ffff880328238e80 ffff8801d8ce1fd8
Nov 20 19:27:23 thoregon kernel: [ 2160.596735]  ffff8801d8ce1fd8 ffff8801d8ce1fd8 ffff8803032cc880 ffff880328238e80
Nov 20 19:27:23 thoregon kernel: [ 2160.596745]  ffff8801d8ce1988 ffff8803278dbbd0 ffff8803278db800 00000000ffffffff
Nov 20 19:27:23 thoregon kernel: [ 2160.596756] Call Trace:
Nov 20 19:27:23 thoregon kernel: [ 2160.596765]  [<ffffffff816b1224>] schedule+0x24/0x60
Nov 20 19:27:23 thoregon kernel: [ 2160.596773]  [<ffffffff814f9dad>] md_super_wait+0x4d/0x80
Nov 20 19:27:23 thoregon kernel: [ 2160.596781]  [<ffffffff8105ca30>] ? __init_waitqueue_head+0x60/0x60
Nov 20 19:27:23 thoregon kernel: [ 2160.596788]  [<ffffffff81500753>] bitmap_unplug+0x173/0x180
Nov 20 19:27:23 thoregon kernel: [ 2160.596796]  [<ffffffff810b6acf>] ? write_cache_pages+0x12f/0x420
Nov 20 19:27:23 thoregon kernel: [ 2160.596804]  [<ffffffff810b6700>] ? set_page_dirty_lock+0x60/0x60
Nov 20 19:27:23 thoregon kernel: [ 2160.596812]  [<ffffffff814e8eb8>] raid1_unplug+0x98/0x110
Nov 20 19:27:23 thoregon kernel: [ 2160.596821]  [<ffffffff81278a6d>] blk_flush_plug_list+0xad/0x240
Nov 20 19:27:23 thoregon kernel: [ 2160.596830]  [<ffffffff81278c13>] blk_finish_plug+0x13/0x50
Nov 20 19:27:23 thoregon kernel: [ 2160.596838]  [<ffffffff810b6e0a>] generic_writepages+0x4a/0x60
Nov 20 19:27:23 thoregon kernel: [ 2160.596847]  [<ffffffff811df898>] xfs_vm_writepages+0x48/0x60
Nov 20 19:27:23 thoregon kernel: [ 2160.596856]  [<ffffffff8111b144>] ? writeback_sb_inodes+0x174/0x410
Nov 20 19:27:23 thoregon kernel: [ 2160.596864]  [<ffffffff810b84be>] do_writepages+0x1e/0x30
Nov 20 19:27:23 thoregon kernel: [ 2160.596873]  [<ffffffff8111a1ee>] __writeback_single_inode+0x3e/0x120
Nov 20 19:27:23 thoregon kernel: [ 2160.596882]  [<ffffffff8111b234>] writeback_sb_inodes+0x264/0x410
Nov 20 19:27:23 thoregon kernel: [ 2160.596891]  [<ffffffff8111b476>] __writeback_inodes_wb+0x96/0xc0
Nov 20 19:27:23 thoregon kernel: [ 2160.596900]  [<ffffffff8111b673>] wb_writeback+0x1d3/0x1e0
Nov 20 19:27:23 thoregon kernel: [ 2160.596908]  [<ffffffff8108152d>] ? trace_hardirqs_on_caller+0xfd/0x190
Nov 20 19:27:23 thoregon kernel: [ 2160.596916]  [<ffffffff8111bb1c>] ? wb_do_writeback+0x7c/0x140
Nov 20 19:27:23 thoregon kernel: [ 2160.596925]  [<ffffffff8111bb40>] wb_do_writeback+0xa0/0x140
Nov 20 19:27:23 thoregon kernel: [ 2160.596935]  [<ffffffff8111bc52>] bdi_writeback_thread+0x72/0x150
Nov 20 19:27:23 thoregon kernel: [ 2160.596944]  [<ffffffff8111bbe0>] ? wb_do_writeback+0x140/0x140
Nov 20 19:27:23 thoregon kernel: [ 2160.596951]  [<ffffffff8105c246>] kthread+0xd6/0xe0
Nov 20 19:27:23 thoregon kernel: [ 2160.596958]  [<ffffffff816b246b>] ? _raw_spin_unlock_irq+0x2b/0x50
Nov 20 19:27:23 thoregon kernel: [ 2160.596967]  [<ffffffff8105c170>] ? flush_kthread_worker+0xe0/0xe0
Nov 20 19:27:23 thoregon kernel: [ 2160.596975]  [<ffffffff816b31ac>] ret_from_fork+0x7c/0xb0
Nov 20 19:27:23 thoregon kernel: [ 2160.596983]  [<ffffffff8105c170>] ? flush_kthread_worker+0xe0/0xe0
Nov 20 19:27:23 thoregon kernel: [ 2160.596988] 1 lock held by flush-9:4/30365:
Nov 20 19:27:23 thoregon kernel: [ 2160.596991]  #0:  (&type->s_umount_key#20){++++.+}, at: [<ffffffff810f59de>] grab_super_passive+0x3e/0x90
Nov 20 19:27:23 thoregon kernel: [ 2160.597019] INFO: task x86_64-pc-linux:26516 blocked for more than 120 seconds.
Nov 20 19:27:23 thoregon kernel: [ 2160.597023] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 20 19:27:23 thoregon kernel: [ 2160.597027] x86_64-pc-linux D 0000000000000000     0 26516  26114 0x00000000
Nov 20 19:27:23 thoregon kernel: [ 2160.597035]  ffff8800468dddf8 0000000000000046 ffff880329303a00 ffff8800468ddfd8
Nov 20 19:27:23 thoregon kernel: [ 2160.597045]  ffff8800468ddfd8 ffff8800468ddfd8 ffff88032989c880 ffff880329303a00
Nov 20 19:27:23 thoregon kernel: [ 2160.597056]  0000000000000246 ffff8800468dc000 ffff8801b1a1bc68 0000000002405150
Nov 20 19:27:23 thoregon kernel: [ 2160.597066] Call Trace:
Nov 20 19:27:23 thoregon kernel: [ 2160.597076]  [<ffffffff816b1224>] schedule+0x24/0x60
Nov 20 19:27:23 thoregon kernel: [ 2160.597085]  [<ffffffff816b1501>] schedule_preempt_disabled+0x21/0x30
Nov 20 19:27:23 thoregon kernel: [ 2160.597094]  [<ffffffff816af3a5>] mutex_lock_nested+0x165/0x330
Nov 20 19:27:23 thoregon kernel: [ 2160.597103]  [<ffffffff810ff5e3>] ? do_unlinkat+0xa3/0x1b0
Nov 20 19:27:23 thoregon kernel: [ 2160.597112]  [<ffffffff810fefae>] ? filename_lookup.isra.57+0x2e/0x80
Nov 20 19:27:23 thoregon kernel: [ 2160.597121]  [<ffffffff810ff5e3>] do_unlinkat+0xa3/0x1b0
Nov 20 19:27:23 thoregon kernel: [ 2160.597129]  [<ffffffff8108152d>] ? trace_hardirqs_on_caller+0xfd/0x190
Nov 20 19:27:23 thoregon kernel: [ 2160.597137]  [<ffffffff8129ca1e>] ? trace_hardirqs_on_thunk+0x3a/0x3f
Nov 20 19:27:23 thoregon kernel: [ 2160.597146]  [<ffffffff81101bb1>] sys_unlink+0x11/0x20
Nov 20 19:27:23 thoregon kernel: [ 2160.597154]  [<ffffffff816b3252>] system_call_fastpath+0x16/0x1b
Nov 20 19:27:23 thoregon kernel: [ 2160.597159] 2 locks held by x86_64-pc-linux/26516:
Nov 20 19:27:23 thoregon kernel: [ 2160.597162]  #0:  (sb_writers#6){.+.+.+}, at: [<ffffffff811135ef>] mnt_want_write+0x1f/0x50
Nov 20 19:27:23 thoregon kernel: [ 2160.597178]  #1:  (&type->i_mutex_dir_key#3/1){+.+.+.}, at: [<ffffffff810ff5e3>] do_unlinkat+0xa3/0x1b0
Nov 20 19:27:23 thoregon kernel: [ 2160.597198] INFO: task x86_64-pc-linux:26523 blocked for more than 120 seconds.
Nov 20 19:27:23 thoregon kernel: [ 2160.597201] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 20 19:27:23 thoregon kernel: [ 2160.597205] x86_64-pc-linux D 0000000000000000     0 26523  26114 0x00000000
Nov 20 19:27:23 thoregon kernel: [ 2160.597212]  ffff8800c0def858 0000000000000046 ffff88032986c880 ffff8800c0deffd8
Nov 20 19:27:23 thoregon kernel: [ 2160.597223]  ffff8800c0deffd8 ffff8800c0deffd8 ffff88032989e580 ffff88032986c880
Nov 20 19:27:23 thoregon kernel: [ 2160.597233]  ffff8800c0def928 ffff8801c57e3a30 7fffffffffffffff ffff88032986c880
Nov 20 19:27:23 thoregon kernel: [ 2160.597243] Call Trace:
Nov 20 19:27:23 thoregon kernel: [ 2160.597253]  [<ffffffff816b1224>] schedule+0x24/0x60
Nov 20 19:27:23 thoregon kernel: [ 2160.597261]  [<ffffffff816aeddd>] schedule_timeout+0x17d/0x1c0
Nov 20 19:27:23 thoregon kernel: [ 2160.597269]  [<ffffffff816b246b>] ? _raw_spin_unlock_irq+0x2b/0x50
Nov 20 19:27:23 thoregon kernel: [ 2160.597277]  [<ffffffff8108152d>] ? trace_hardirqs_on_caller+0xfd/0x190
Nov 20 19:27:23 thoregon kernel: [ 2160.597286]  [<ffffffff816b057e>] __down_common+0x97/0xe8
Nov 20 19:27:23 thoregon kernel: [ 2160.597294]  [<ffffffff816b062e>] __down+0x18/0x1a
Nov 20 19:27:23 thoregon kernel: [ 2160.597301]  [<ffffffff8106245b>] down+0x3b/0x50
Nov 20 19:27:23 thoregon kernel: [ 2160.597308]  [<ffffffff811e21b5>] xfs_buf_lock+0x25/0x60
Nov 20 19:27:23 thoregon kernel: [ 2160.597315]  [<ffffffff811e22e2>] _xfs_buf_find+0xf2/0x200
Nov 20 19:27:23 thoregon kernel: [ 2160.597323]  [<ffffffff811e241f>] xfs_buf_get_map+0x2f/0x130
Nov 20 19:27:23 thoregon kernel: [ 2160.597331]  [<ffffffff81210417>] ? xfs_dabuf_map.isra.23+0xb7/0x2f0
Nov 20 19:27:23 thoregon kernel: [ 2160.597339]  [<ffffffff810b572e>] ? get_page_from_freelist+0x1de/0x460
Nov 20 19:27:23 thoregon kernel: [ 2160.597346]  [<ffffffff811e2b23>] xfs_buf_read_map+0x13/0x90
Nov 20 19:27:23 thoregon kernel: [ 2160.597353]  [<ffffffff81233e8d>] xfs_trans_read_buf_map+0x1ad/0x2d0
Nov 20 19:27:23 thoregon kernel: [ 2160.597361]  [<ffffffff812116b6>] xfs_da_read_buf+0xb6/0x180
Nov 20 19:27:23 thoregon kernel: [ 2160.597371]  [<ffffffff8121724c>] xfs_dir2_leaf_lookup_int+0x5c/0x2c0
Nov 20 19:27:23 thoregon kernel: [ 2160.597380]  [<ffffffff81217583>] xfs_dir2_leaf_removename+0x33/0x330
Nov 20 19:27:23 thoregon kernel: [ 2160.597389]  [<ffffffff8120777b>] ? xfs_bmap_last_offset+0x8b/0xc0
Nov 20 19:27:23 thoregon kernel: [ 2160.597398]  [<ffffffff81213b94>] xfs_dir_removename+0x134/0x150
Nov 20 19:27:23 thoregon kernel: [ 2160.597406]  [<ffffffff811f5ce6>] xfs_remove+0x1f6/0x370
Nov 20 19:27:23 thoregon kernel: [ 2160.597416]  [<ffffffff816af4ac>] ? mutex_lock_nested+0x26c/0x330
Nov 20 19:27:23 thoregon kernel: [ 2160.597424]  [<ffffffff810ff48f>] ? vfs_unlink+0x4f/0x100
Nov 20 19:27:23 thoregon kernel: [ 2160.597432]  [<ffffffff811ec7f3>] xfs_vn_unlink+0x43/0x90
Nov 20 19:27:23 thoregon kernel: [ 2160.597441]  [<ffffffff810ff4cd>] vfs_unlink+0x8d/0x100
Nov 20 19:27:23 thoregon kernel: [ 2160.597449]  [<ffffffff810ff63a>] do_unlinkat+0xfa/0x1b0
Nov 20 19:27:23 thoregon kernel: [ 2160.597458]  [<ffffffff8108152d>] ? trace_hardirqs_on_caller+0xfd/0x190
Nov 20 19:27:23 thoregon kernel: [ 2160.597466]  [<ffffffff8129ca1e>] ? trace_hardirqs_on_thunk+0x3a/0x3f
Nov 20 19:27:23 thoregon kernel: [ 2160.597475]  [<ffffffff81101bb1>] sys_unlink+0x11/0x20
Nov 20 19:27:23 thoregon kernel: [ 2160.597482]  [<ffffffff816b3252>] system_call_fastpath+0x16/0x1b
Nov 20 19:27:23 thoregon kernel: [ 2160.597487] 6 locks held by x86_64-pc-linux/26523:
Nov 20 19:27:23 thoregon kernel: [ 2160.597490]  #0:  (sb_writers#6){.+.+.+}, at: [<ffffffff811135ef>] mnt_want_write+0x1f/0x50
Nov 20 19:27:23 thoregon kernel: [ 2160.597505]  #1:  (&type->i_mutex_dir_key#3/1){+.+.+.}, at: [<ffffffff810ff5e3>] do_unlinkat+0xa3/0x1b0
Nov 20 19:27:23 thoregon kernel: [ 2160.597522]  #2:  (&sb->s_type->i_mutex_key#7){+.+.+.}, at: [<ffffffff810ff48f>] vfs_unlink+0x4f/0x100
Nov 20 19:27:23 thoregon kernel: [ 2160.597538]  #3:  (sb_internal){.+.+.?}, at: [<ffffffff8122b3e8>] xfs_trans_alloc+0x28/0x50
Nov 20 19:27:23 thoregon kernel: [ 2160.597552]  #4:  (&(&ip->i_lock)->mr_lock/4){+.+...}, at: [<ffffffff811e8414>] xfs_ilock+0x84/0xb0
Nov 20 19:27:23 thoregon kernel: [ 2160.597567]  #5:  (&(&ip->i_lock)->mr_lock/5){+.+...}, at: [<ffffffff811e8414>] xfs_ilock+0x84/0xb0
Nov 20 19:27:23 thoregon kernel: [ 2160.597584] INFO: task x86_64-pc-linux:26875 blocked for more than 120 seconds.
Nov 20 19:27:23 thoregon kernel: [ 2160.597587] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 20 19:27:23 thoregon kernel: [ 2160.597591] x86_64-pc-linux D 0000000000000000     0 26875  26114 0x00000000
Nov 20 19:27:23 thoregon kernel: [ 2160.597598]  ffff8800c0c5ba08 0000000000000046 ffff8802f0594880 ffff8800c0c5bfd8
Nov 20 19:27:23 thoregon kernel: [ 2160.597609]  ffff8800c0c5bfd8 ffff8800c0c5bfd8 ffff88032989c880 ffff8802f0594880
Nov 20 19:27:23 thoregon kernel: [ 2160.597619]  0000000000000250 ffff880205a7aa30 7fffffffffffffff ffff8802f0594880
Nov 20 19:27:23 thoregon kernel: [ 2160.597630] Call Trace:
Nov 20 19:27:23 thoregon kernel: [ 2160.597639]  [<ffffffff816b1224>] schedule+0x24/0x60
Nov 20 19:27:23 thoregon kernel: [ 2160.597648]  [<ffffffff816aeddd>] schedule_timeout+0x17d/0x1c0
Nov 20 19:27:23 thoregon kernel: [ 2160.597655]  [<ffffffff811f70b4>] ? kmem_zone_zalloc+0x34/0x50
Nov 20 19:27:23 thoregon kernel: [ 2160.597662]  [<ffffffff816b246b>] ? _raw_spin_unlock_irq+0x2b/0x50
Nov 20 19:27:23 thoregon kernel: [ 2160.597670]  [<ffffffff8108152d>] ? trace_hardirqs_on_caller+0xfd/0x190
Nov 20 19:27:23 thoregon kernel: [ 2160.597679]  [<ffffffff816b057e>] __down_common+0x97/0xe8
Nov 20 19:27:23 thoregon kernel: [ 2160.597688]  [<ffffffff816b062e>] __down+0x18/0x1a
Nov 20 19:27:23 thoregon kernel: [ 2160.597695]  [<ffffffff8106245b>] down+0x3b/0x50
Nov 20 19:27:23 thoregon kernel: [ 2160.597702]  [<ffffffff811e21b5>] xfs_buf_lock+0x25/0x60
Nov 20 19:27:23 thoregon kernel: [ 2160.597708]  [<ffffffff811e22e2>] _xfs_buf_find+0xf2/0x200
Nov 20 19:27:23 thoregon kernel: [ 2160.597716]  [<ffffffff811e241f>] xfs_buf_get_map+0x2f/0x130
Nov 20 19:27:23 thoregon kernel: [ 2160.597723]  [<ffffffff811e2b23>] xfs_buf_read_map+0x13/0x90
Nov 20 19:27:23 thoregon kernel: [ 2160.597730]  [<ffffffff81233e8d>] xfs_trans_read_buf_map+0x1ad/0x2d0
Nov 20 19:27:23 thoregon kernel: [ 2160.597739]  [<ffffffff8121ce6f>] xfs_read_agi+0x7f/0x110
Nov 20 19:27:23 thoregon kernel: [ 2160.597748]  [<ffffffff8121fb55>] xfs_iunlink+0x45/0x150
Nov 20 19:27:23 thoregon kernel: [ 2160.597757]  [<ffffffff81041fc1>] ? current_fs_time+0x11/0x50
Nov 20 19:27:23 thoregon kernel: [ 2160.597764]  [<ffffffff81234352>] ? xfs_trans_ichgtime+0x22/0xa0
Nov 20 19:27:23 thoregon kernel: [ 2160.597772]  [<ffffffff811f3ea0>] xfs_droplink+0x50/0x60
Nov 20 19:27:23 thoregon kernel: [ 2160.597778]  [<ffffffff811f5d77>] xfs_remove+0x287/0x370
Nov 20 19:27:23 thoregon kernel: [ 2160.597788]  [<ffffffff816af4ac>] ? mutex_lock_nested+0x26c/0x330
Nov 20 19:27:23 thoregon kernel: [ 2160.597796]  [<ffffffff810ff48f>] ? vfs_unlink+0x4f/0x100
Nov 20 19:27:23 thoregon kernel: [ 2160.597804]  [<ffffffff811ec7f3>] xfs_vn_unlink+0x43/0x90
Nov 20 19:27:23 thoregon kernel: [ 2160.597813]  [<ffffffff810ff4cd>] vfs_unlink+0x8d/0x100
Nov 20 19:27:23 thoregon kernel: [ 2160.597821]  [<ffffffff810ff63a>] do_unlinkat+0xfa/0x1b0
Nov 20 19:27:23 thoregon kernel: [ 2160.597830]  [<ffffffff8108152d>] ? trace_hardirqs_on_caller+0xfd/0x190
Nov 20 19:27:23 thoregon kernel: [ 2160.597837]  [<ffffffff8129ca1e>] ? trace_hardirqs_on_thunk+0x3a/0x3f
Nov 20 19:27:23 thoregon kernel: [ 2160.597846]  [<ffffffff81101bb1>] sys_unlink+0x11/0x20
Nov 20 19:27:23 thoregon kernel: [ 2160.597854]  [<ffffffff816b3252>] system_call_fastpath+0x16/0x1b
Nov 20 19:27:23 thoregon kernel: [ 2160.597859] 6 locks held by x86_64-pc-linux/26875:
Nov 20 19:27:23 thoregon kernel: [ 2160.597862]  #0:  (sb_writers#6){.+.+.+}, at: [<ffffffff811135ef>] mnt_want_write+0x1f/0x50
Nov 20 19:27:23 thoregon kernel: [ 2160.597877]  #1:  (&type->i_mutex_dir_key#3/1){+.+.+.}, at: [<ffffffff810ff5e3>] do_unlinkat+0xa3/0x1b0
Nov 20 19:27:23 thoregon kernel: [ 2160.597894]  #2:  (&sb->s_type->i_mutex_key#7){+.+.+.}, at: [<ffffffff810ff48f>] vfs_unlink+0x4f/0x100
Nov 20 19:27:23 thoregon kernel: [ 2160.597909]  #3:  (sb_internal){.+.+.?}, at: [<ffffffff8122b3e8>] xfs_trans_alloc+0x28/0x50
Nov 20 19:27:23 thoregon kernel: [ 2160.597923]  #4:  (&(&ip->i_lock)->mr_lock/4){+.+...}, at: [<ffffffff811e8414>] xfs_ilock+0x84/0xb0
Nov 20 19:27:23 thoregon kernel: [ 2160.597937]  #5:  (&(&ip->i_lock)->mr_lock/5){+.+...}, at: [<ffffffff811e8414>] xfs_ilock+0x84/0xb0
Nov 20 19:27:23 thoregon kernel: [ 2160.597955] INFO: task x86_64-pc-linux:26897 blocked for more than 120 seconds.
Nov 20 19:27:23 thoregon kernel: [ 2160.597958] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 20 19:27:23 thoregon kernel: [ 2160.597961] x86_64-pc-linux D 0000000000000001     0 26897  26114 0x00000000
Nov 20 19:27:23 thoregon kernel: [ 2160.597969]  ffff880101959858 0000000000000046 ffff88032934ab80 ffff880101959fd8
Nov 20 19:27:23 thoregon kernel: [ 2160.597980]  ffff880101959fd8 ffff880101959fd8 ffff880162a3e580 ffff88032934ab80
Nov 20 19:27:23 thoregon kernel: [ 2160.597990]  ffff880101959928 ffff8801ca545030 7fffffffffffffff ffff88032934ab80
Nov 20 19:27:23 thoregon kernel: [ 2160.598000] Call Trace:
Nov 20 19:27:23 thoregon kernel: [ 2160.598010]  [<ffffffff816b1224>] schedule+0x24/0x60
Nov 20 19:27:23 thoregon kernel: [ 2160.598018]  [<ffffffff816aeddd>] schedule_timeout+0x17d/0x1c0
Nov 20 19:27:23 thoregon kernel: [ 2160.598026]  [<ffffffff816b246b>] ? _raw_spin_unlock_irq+0x2b/0x50
Nov 20 19:27:23 thoregon kernel: [ 2160.598034]  [<ffffffff8108152d>] ? trace_hardirqs_on_caller+0xfd/0x190
Nov 20 19:27:23 thoregon kernel: [ 2160.598043]  [<ffffffff816b057e>] __down_common+0x97/0xe8
Nov 20 19:27:23 thoregon kernel: [ 2160.598052]  [<ffffffff816b062e>] __down+0x18/0x1a
Nov 20 19:27:23 thoregon kernel: [ 2160.598058]  [<ffffffff8106245b>] down+0x3b/0x50
Nov 20 19:27:23 thoregon kernel: [ 2160.598065]  [<ffffffff811e21b5>] xfs_buf_lock+0x25/0x60
Nov 20 19:27:23 thoregon kernel: [ 2160.598072]  [<ffffffff811e22e2>] _xfs_buf_find+0xf2/0x200
Nov 20 19:27:23 thoregon kernel: [ 2160.598079]  [<ffffffff811e241f>] xfs_buf_get_map+0x2f/0x130
Nov 20 19:27:23 thoregon kernel: [ 2160.598087]  [<ffffffff81210417>] ? xfs_dabuf_map.isra.23+0xb7/0x2f0
Nov 20 19:27:23 thoregon kernel: [ 2160.598094]  [<ffffffff810b572e>] ? get_page_from_freelist+0x1de/0x460
Nov 20 19:27:23 thoregon kernel: [ 2160.598102]  [<ffffffff811e2b23>] xfs_buf_read_map+0x13/0x90
Nov 20 19:27:23 thoregon kernel: [ 2160.598109]  [<ffffffff81233e8d>] xfs_trans_read_buf_map+0x1ad/0x2d0
Nov 20 19:27:23 thoregon kernel: [ 2160.598117]  [<ffffffff812116b6>] xfs_da_read_buf+0xb6/0x180
Nov 20 19:27:23 thoregon kernel: [ 2160.598126]  [<ffffffff8121724c>] xfs_dir2_leaf_lookup_int+0x5c/0x2c0
Nov 20 19:27:23 thoregon kernel: [ 2160.598135]  [<ffffffff81217583>] xfs_dir2_leaf_removename+0x33/0x330
Nov 20 19:27:23 thoregon kernel: [ 2160.598144]  [<ffffffff8120777b>] ? xfs_bmap_last_offset+0x8b/0xc0
Nov 20 19:27:23 thoregon kernel: [ 2160.598152]  [<ffffffff81213b94>] xfs_dir_removename+0x134/0x150
Nov 20 19:27:23 thoregon kernel: [ 2160.598160]  [<ffffffff811f5ce6>] xfs_remove+0x1f6/0x370
Nov 20 19:27:23 thoregon kernel: [ 2160.598170]  [<ffffffff816af4ac>] ? mutex_lock_nested+0x26c/0x330
Nov 20 19:27:23 thoregon kernel: [ 2160.598178]  [<ffffffff810ff48f>] ? vfs_unlink+0x4f/0x100
Nov 20 19:27:23 thoregon kernel: [ 2160.598186]  [<ffffffff811ec7f3>] xfs_vn_unlink+0x43/0x90
Nov 20 19:27:23 thoregon kernel: [ 2160.598195]  [<ffffffff810ff4cd>] vfs_unlink+0x8d/0x100
Nov 20 19:27:23 thoregon kernel: [ 2160.598203]  [<ffffffff810ff63a>] do_unlinkat+0xfa/0x1b0
Nov 20 19:27:23 thoregon kernel: [ 2160.598211]  [<ffffffff8108152d>] ? trace_hardirqs_on_caller+0xfd/0x190
Nov 20 19:27:23 thoregon kernel: [ 2160.598219]  [<ffffffff8129ca1e>] ? trace_hardirqs_on_thunk+0x3a/0x3f
Nov 20 19:27:23 thoregon kernel: [ 2160.598228]  [<ffffffff81101bb1>] sys_unlink+0x11/0x20
Nov 20 19:27:23 thoregon kernel: [ 2160.598236]  [<ffffffff816b3252>] system_call_fastpath+0x16/0x1b
Nov 20 19:27:23 thoregon kernel: [ 2160.598241] 6 locks held by x86_64-pc-linux/26897:
Nov 20 19:27:23 thoregon kernel: [ 2160.598244]  #0:  (sb_writers#6){.+.+.+}, at: [<ffffffff811135ef>] mnt_want_write+0x1f/0x50
Nov 20 19:27:23 thoregon kernel: [ 2160.598258]  #1:  (&type->i_mutex_dir_key#3/1){+.+.+.}, at: [<ffffffff810ff5e3>] do_unlinkat+0xa3/0x1b0
Nov 20 19:27:23 thoregon kernel: [ 2160.598275]  #2:  (&sb->s_type->i_mutex_key#7){+.+.+.}, at: [<ffffffff810ff48f>] vfs_unlink+0x4f/0x100
Nov 20 19:27:23 thoregon kernel: [ 2160.598290]  #3:  (sb_internal){.+.+.?}, at: [<ffffffff8122b3e8>] xfs_trans_alloc+0x28/0x50
Nov 20 19:27:23 thoregon kernel: [ 2160.598304]  #4:  (&(&ip->i_lock)->mr_lock/4){+.+...}, at: [<ffffffff811e8414>] xfs_ilock+0x84/0xb0
Nov 20 19:27:23 thoregon kernel: [ 2160.598318]  #5:  (&(&ip->i_lock)->mr_lock){++++--}, at: [<ffffffff811e854c>] xfs_ilock_nowait+0xbc/0x100
Nov 20 19:27:23 thoregon kernel: [ 2160.598334] INFO: task x86_64-pc-linux:26904 blocked for more than 120 seconds.
Nov 20 19:27:23 thoregon kernel: [ 2160.598337] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 20 19:27:23 thoregon kernel: [ 2160.598341] x86_64-pc-linux D 0000000000000001     0 26904  26114 0x00000000
Nov 20 19:27:23 thoregon kernel: [ 2160.598348]  ffff880066bff858 0000000000000046 ffff880329301d00 ffff880066bfffd8
Nov 20 19:27:23 thoregon kernel: [ 2160.598359]  ffff880066bfffd8 ffff880066bfffd8 ffff88013a7ce580 ffff880329301d00
Nov 20 19:27:23 thoregon kernel: [ 2160.598369]  ffff880066bff928 ffff8801c572c830 7fffffffffffffff ffff880329301d00
Nov 20 19:27:23 thoregon kernel: [ 2160.598380] Call Trace:
Nov 20 19:27:23 thoregon kernel: [ 2160.598389]  [<ffffffff816b1224>] schedule+0x24/0x60
Nov 20 19:27:23 thoregon kernel: [ 2160.598398]  [<ffffffff816aeddd>] schedule_timeout+0x17d/0x1c0
Nov 20 19:27:23 thoregon kernel: [ 2160.598405]  [<ffffffff816b246b>] ? _raw_spin_unlock_irq+0x2b/0x50
Nov 20 19:27:23 thoregon kernel: [ 2160.598413]  [<ffffffff8108152d>] ? trace_hardirqs_on_caller+0xfd/0x190
Nov 20 19:27:23 thoregon kernel: [ 2160.598422]  [<ffffffff816b057e>] __down_common+0x97/0xe8
Nov 20 19:27:23 thoregon kernel: [ 2160.598431]  [<ffffffff816b062e>] __down+0x18/0x1a
Nov 20 19:27:23 thoregon kernel: [ 2160.598437]  [<ffffffff8106245b>] down+0x3b/0x50
Nov 20 19:27:23 thoregon kernel: [ 2160.598444]  [<ffffffff811e21b5>] xfs_buf_lock+0x25/0x60
Nov 20 19:27:23 thoregon kernel: [ 2160.598451]  [<ffffffff811e22e2>] _xfs_buf_find+0xf2/0x200
Nov 20 19:27:23 thoregon kernel: [ 2160.598458]  [<ffffffff811e241f>] xfs_buf_get_map+0x2f/0x130
Nov 20 19:27:23 thoregon kernel: [ 2160.598466]  [<ffffffff81210417>] ? xfs_dabuf_map.isra.23+0xb7/0x2f0
Nov 20 19:27:23 thoregon kernel: [ 2160.598474]  [<ffffffff810b572e>] ? get_page_from_freelist+0x1de/0x460
Nov 20 19:27:23 thoregon kernel: [ 2160.598481]  [<ffffffff811e2b23>] xfs_buf_read_map+0x13/0x90
Nov 20 19:27:23 thoregon kernel: [ 2160.598488]  [<ffffffff81233e8d>] xfs_trans_read_buf_map+0x1ad/0x2d0
Nov 20 19:27:23 thoregon kernel: [ 2160.598496]  [<ffffffff812116b6>] xfs_da_read_buf+0xb6/0x180
Nov 20 19:27:23 thoregon kernel: [ 2160.598505]  [<ffffffff8121724c>] xfs_dir2_leaf_lookup_int+0x5c/0x2c0
Nov 20 19:27:23 thoregon kernel: [ 2160.598515]  [<ffffffff81217583>] xfs_dir2_leaf_removename+0x33/0x330
Nov 20 19:27:23 thoregon kernel: [ 2160.598523]  [<ffffffff8120777b>] ? xfs_bmap_last_offset+0x8b/0xc0
Nov 20 19:27:23 thoregon kernel: [ 2160.598532]  [<ffffffff81213b94>] xfs_dir_removename+0x134/0x150
Nov 20 19:27:23 thoregon kernel: [ 2160.598540]  [<ffffffff811f5ce6>] xfs_remove+0x1f6/0x370
Nov 20 19:27:23 thoregon kernel: [ 2160.598549]  [<ffffffff816af4ac>] ? mutex_lock_nested+0x26c/0x330
Nov 20 19:27:23 thoregon kernel: [ 2160.598557]  [<ffffffff810ff48f>] ? vfs_unlink+0x4f/0x100
Nov 20 19:27:23 thoregon kernel: [ 2160.598565]  [<ffffffff811ec7f3>] xfs_vn_unlink+0x43/0x90
Nov 20 19:27:23 thoregon kernel: [ 2160.598574]  [<ffffffff810ff4cd>] vfs_unlink+0x8d/0x100
Nov 20 19:27:23 thoregon kernel: [ 2160.598582]  [<ffffffff810ff63a>] do_unlinkat+0xfa/0x1b0
Nov 20 19:27:23 thoregon kernel: [ 2160.598591]  [<ffffffff8108152d>] ? trace_hardirqs_on_caller+0xfd/0x190
Nov 20 19:27:23 thoregon kernel: [ 2160.598598]  [<ffffffff8129ca1e>] ? trace_hardirqs_on_thunk+0x3a/0x3f
Nov 20 19:27:23 thoregon kernel: [ 2160.598607]  [<ffffffff81101bb1>] sys_unlink+0x11/0x20
Nov 20 19:27:23 thoregon kernel: [ 2160.598615]  [<ffffffff816b3252>] system_call_fastpath+0x16/0x1b
Nov 20 19:27:23 thoregon kernel: [ 2160.598620] 6 locks held by x86_64-pc-linux/26904:
Nov 20 19:27:23 thoregon kernel: [ 2160.598623]  #0:  (sb_writers#6){.+.+.+}, at: [<ffffffff811135ef>] mnt_want_write+0x1f/0x50
Nov 20 19:27:23 thoregon kernel: [ 2160.598637]  #1:  (&type->i_mutex_dir_key#3/1){+.+.+.}, at: [<ffffffff810ff5e3>] do_unlinkat+0xa3/0x1b0
Nov 20 19:27:23 thoregon kernel: [ 2160.598654]  #2:  (&sb->s_type->i_mutex_key#7){+.+.+.}, at: [<ffffffff810ff48f>] vfs_unlink+0x4f/0x100
Nov 20 19:27:23 thoregon kernel: [ 2160.598669]  #3:  (sb_internal){.+.+.?}, at: [<ffffffff8122b3e8>] xfs_trans_alloc+0x28/0x50
Nov 20 19:27:23 thoregon kernel: [ 2160.598683]  #4:  (&(&ip->i_lock)->mr_lock/4){+.+...}, at: [<ffffffff811e8414>] xfs_ilock+0x84/0xb0
Nov 20 19:27:23 thoregon kernel: [ 2160.598697]  #5:  (&(&ip->i_lock)->mr_lock){++++--}, at: [<ffffffff811e854c>] xfs_ilock_nowait+0xbc/0x100
Nov 20 19:27:23 thoregon kernel: [ 2160.598713] INFO: task x86_64-pc-linux:26907 blocked for more than 120 seconds.
Nov 20 19:27:23 thoregon kernel: [ 2160.598716] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 20 19:27:23 thoregon kernel: [ 2160.598720] x86_64-pc-linux D 0000000000000000     0 26907  26114 0x00000000
Nov 20 19:27:23 thoregon kernel: [ 2160.598727]  ffff8800c0eed858 0000000000000046 ffff8802a7be6580 ffff8800c0eedfd8
Nov 20 19:27:23 thoregon kernel: [ 2160.598738]  ffff8800c0eedfd8 ffff8800c0eedfd8 ffff88032989e580 ffff8802a7be6580
Nov 20 19:27:23 thoregon kernel: [ 2160.598748]  ffff8800c0eed928 ffff8801c55c7630 7fffffffffffffff ffff8802a7be6580
Nov 20 19:27:23 thoregon kernel: [ 2160.598758] Call Trace:
Nov 20 19:27:23 thoregon kernel: [ 2160.598768]  [<ffffffff816b1224>] schedule+0x24/0x60
Nov 20 19:27:23 thoregon kernel: [ 2160.598776]  [<ffffffff816aeddd>] schedule_timeout+0x17d/0x1c0
Nov 20 19:27:23 thoregon kernel: [ 2160.598784]  [<ffffffff816b246b>] ? _raw_spin_unlock_irq+0x2b/0x50
Nov 20 19:27:23 thoregon kernel: [ 2160.598792]  [<ffffffff8108152d>] ? trace_hardirqs_on_caller+0xfd/0x190
Nov 20 19:27:23 thoregon kernel: [ 2160.598801]  [<ffffffff816b057e>] __down_common+0x97/0xe8
Nov 20 19:27:23 thoregon kernel: [ 2160.598809]  [<ffffffff816b062e>] __down+0x18/0x1a
Nov 20 19:27:23 thoregon kernel: [ 2160.598816]  [<ffffffff8106245b>] down+0x3b/0x50
Nov 20 19:27:23 thoregon kernel: [ 2160.598823]  [<ffffffff811e21b5>] xfs_buf_lock+0x25/0x60
Nov 20 19:27:23 thoregon kernel: [ 2160.598830]  [<ffffffff811e22e2>] _xfs_buf_find+0xf2/0x200
Nov 20 19:27:23 thoregon kernel: [ 2160.598837]  [<ffffffff811e241f>] xfs_buf_get_map+0x2f/0x130
Nov 20 19:27:23 thoregon kernel: [ 2160.598844]  [<ffffffff81210417>] ? xfs_dabuf_map.isra.23+0xb7/0x2f0
Nov 20 19:27:23 thoregon kernel: [ 2160.598852]  [<ffffffff810b572e>] ? get_page_from_freelist+0x1de/0x460
Nov 20 19:27:23 thoregon kernel: [ 2160.598859]  [<ffffffff811e2b23>] xfs_buf_read_map+0x13/0x90
Nov 20 19:27:23 thoregon kernel: [ 2160.598867]  [<ffffffff81233e8d>] xfs_trans_read_buf_map+0x1ad/0x2d0
Nov 20 19:27:23 thoregon kernel: [ 2160.598874]  [<ffffffff812116b6>] xfs_da_read_buf+0xb6/0x180
Nov 20 19:27:23 thoregon kernel: [ 2160.598884]  [<ffffffff8121724c>] xfs_dir2_leaf_lookup_int+0x5c/0x2c0
Nov 20 19:27:23 thoregon kernel: [ 2160.598893]  [<ffffffff81217583>] xfs_dir2_leaf_removename+0x33/0x330
Nov 20 19:27:23 thoregon kernel: [ 2160.598901]  [<ffffffff8120777b>] ? xfs_bmap_last_offset+0x8b/0xc0
Nov 20 19:27:23 thoregon kernel: [ 2160.598910]  [<ffffffff81213b94>] xfs_dir_removename+0x134/0x150
Nov 20 19:27:23 thoregon kernel: [ 2160.598918]  [<ffffffff811f5ce6>] xfs_remove+0x1f6/0x370
Nov 20 19:27:23 thoregon kernel: [ 2160.598927]  [<ffffffff816af4ac>] ? mutex_lock_nested+0x26c/0x330
Nov 20 19:27:23 thoregon kernel: [ 2160.598935]  [<ffffffff810ff48f>] ? vfs_unlink+0x4f/0x100
Nov 20 19:27:23 thoregon kernel: [ 2160.598944]  [<ffffffff811ec7f3>] xfs_vn_unlink+0x43/0x90
Nov 20 19:27:23 thoregon kernel: [ 2160.598952]  [<ffffffff810ff4cd>] vfs_unlink+0x8d/0x100
Nov 20 19:27:23 thoregon kernel: [ 2160.598961]  [<ffffffff810ff63a>] do_unlinkat+0xfa/0x1b0
Nov 20 19:27:23 thoregon kernel: [ 2160.598969]  [<ffffffff8108152d>] ? trace_hardirqs_on_caller+0xfd/0x190
Nov 20 19:27:23 thoregon kernel: [ 2160.598977]  [<ffffffff8129ca1e>] ? trace_hardirqs_on_thunk+0x3a/0x3f
Nov 20 19:27:23 thoregon kernel: [ 2160.598986]  [<ffffffff81101bb1>] sys_unlink+0x11/0x20
Nov 20 19:27:23 thoregon kernel: [ 2160.598993]  [<ffffffff816b3252>] system_call_fastpath+0x16/0x1b
Nov 20 19:27:23 thoregon kernel: [ 2160.598998] 6 locks held by x86_64-pc-linux/26907:
Nov 20 19:27:23 thoregon kernel: [ 2160.599001]  #0:  (sb_writers#6){.+.+.+}, at: [<ffffffff811135ef>] mnt_want_write+0x1f/0x50
Nov 20 19:27:23 thoregon kernel: [ 2160.599016]  #1:  (&type->i_mutex_dir_key#3/1){+.+.+.}, at: [<ffffffff810ff5e3>] do_unlinkat+0xa3/0x1b0
Nov 20 19:27:23 thoregon kernel: [ 2160.599033]  #2:  (&sb->s_type->i_mutex_key#7){+.+.+.}, at: [<ffffffff810ff48f>] vfs_unlink+0x4f/0x100
Nov 20 19:27:23 thoregon kernel: [ 2160.599048]  #3:  (sb_internal){.+.+.?}, at: [<ffffffff8122b3e8>] xfs_trans_alloc+0x28/0x50
Nov 20 19:27:23 thoregon kernel: [ 2160.599062]  #4:  (&(&ip->i_lock)->mr_lock/4){+.+...}, at: [<ffffffff811e8414>] xfs_ilock+0x84/0xb0
Nov 20 19:27:23 thoregon kernel: [ 2160.599076]  #5:  (&(&ip->i_lock)->mr_lock){++++--}, at: [<ffffffff811e854c>] xfs_ilock_nowait+0xbc/0x100
Nov 20 19:27:41 thoregon login[3002]: pam_unix(login:session): session closed for user root
Nov 20 19:27:43 thoregon acpid: client connected from 2991[0:0]
Nov 20 19:27:43 thoregon acpid: 1 client rule loaded
Nov 20 19:29:23 thoregon kernel: [ 2280.601023] INFO: task kswapd0:725 blocked for more than 120 seconds.
Nov 20 19:29:23 thoregon kernel: [ 2280.601030] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 20 19:29:23 thoregon kernel: [ 2280.601036] kswapd0         D 0000000000000001     0   725      2 0x00000000
Nov 20 19:29:23 thoregon kernel: [ 2280.601048]  ffff8803280d13f8 0000000000000046 ffff880329a0ab80 ffff8803280d1fd8
Nov 20 19:29:23 thoregon kernel: [ 2280.601063]  ffff8803280d1fd8 ffff8803280d1fd8 ffff880046b7c880 ffff880329a0ab80
Nov 20 19:29:23 thoregon kernel: [ 2280.601073]  ffff8803280d1408 ffff8803278dbbd0 ffff8803278db800 00000000ffffffff
Nov 20 19:29:23 thoregon kernel: [ 2280.601084] Call Trace:
Nov 20 19:29:23 thoregon kernel: [ 2280.601101]  [<ffffffff816b1224>] schedule+0x24/0x60
Nov 20 19:29:23 thoregon kernel: [ 2280.601111]  [<ffffffff814f9dad>] md_super_wait+0x4d/0x80
Nov 20 19:29:23 thoregon kernel: [ 2280.601121]  [<ffffffff8105ca30>] ? __init_waitqueue_head+0x60/0x60
Nov 20 19:29:23 thoregon kernel: [ 2280.601129]  [<ffffffff81500753>] bitmap_unplug+0x173/0x180
Nov 20 19:29:23 thoregon kernel: [ 2280.601138]  [<ffffffff8108152d>] ? trace_hardirqs_on_caller+0xfd/0x190
Nov 20 19:29:23 thoregon kernel: [ 2280.601147]  [<ffffffff814e8eb8>] raid1_unplug+0x98/0x110
Nov 20 19:29:23 thoregon kernel: [ 2280.601157]  [<ffffffff81278a6d>] blk_flush_plug_list+0xad/0x240
Nov 20 19:29:23 thoregon kernel: [ 2280.601165]  [<ffffffff816b24ca>] ? _raw_spin_unlock_irqrestore+0x3a/0x70
Nov 20 19:29:23 thoregon kernel: [ 2280.601175]  [<ffffffff816b15c3>] io_schedule_timeout+0x83/0xf0
Nov 20 19:29:23 thoregon kernel: [ 2280.601182]  [<ffffffff810815cd>] ? trace_hardirqs_on+0xd/0x10
Nov 20 19:29:23 thoregon kernel: [ 2280.601192]  [<ffffffff810b0e1d>] mempool_alloc+0x12d/0x160
Nov 20 19:29:23 thoregon kernel: [ 2280.601200]  [<ffffffff8105ca30>] ? __init_waitqueue_head+0x60/0x60
Nov 20 19:29:23 thoregon kernel: [ 2280.601209]  [<ffffffff811263da>] bvec_alloc_bs+0xda/0x100
Nov 20 19:29:23 thoregon kernel: [ 2280.601217]  [<ffffffff811264ea>] bio_alloc_bioset+0xea/0x110
Nov 20 19:29:23 thoregon kernel: [ 2280.601226]  [<ffffffff81126656>] bio_clone_bioset+0x16/0x40
Nov 20 19:29:23 thoregon kernel: [ 2280.601233]  [<ffffffff814f471a>] bio_clone_mddev+0x1a/0x30
Nov 20 19:29:23 thoregon kernel: [ 2280.601241]  [<ffffffff814edbb1>] make_request+0x551/0xde0
Nov 20 19:29:23 thoregon kernel: [ 2280.601249]  [<ffffffff814ed9e0>] ? make_request+0x380/0xde0
Nov 20 19:29:23 thoregon kernel: [ 2280.601257]  [<ffffffff810b0b80>] ? mempool_alloc_slab+0x10/0x20
Nov 20 19:29:23 thoregon kernel: [ 2280.601266]  [<ffffffff814f80bb>] md_make_request+0x21b/0x4d0
Nov 20 19:29:23 thoregon kernel: [ 2280.601273]  [<ffffffff814f7ef0>] ? md_make_request+0x50/0x4d0
Nov 20 19:29:23 thoregon kernel: [ 2280.601282]  [<ffffffff81276e52>] generic_make_request+0xc2/0x100
Nov 20 19:29:23 thoregon kernel: [ 2280.601291]  [<ffffffff81276ef5>] submit_bio+0x65/0x110
Nov 20 19:29:23 thoregon kernel: [ 2280.601301]  [<ffffffff811e07bf>] xfs_submit_ioend_bio.isra.21+0x2f/0x40
Nov 20 19:29:23 thoregon kernel: [ 2280.601309]  [<ffffffff811e088e>] xfs_submit_ioend+0xbe/0x110
Nov 20 19:29:23 thoregon kernel: [ 2280.601318]  [<ffffffff811e0c91>] xfs_vm_writepage+0x3b1/0x540
Nov 20 19:29:23 thoregon kernel: [ 2280.601328]  [<ffffffff810bcd84>] shrink_page_list+0x564/0x890
Nov 20 19:29:23 thoregon kernel: [ 2280.601338]  [<ffffffff810bd637>] shrink_inactive_list+0x1d7/0x310
Nov 20 19:29:23 thoregon kernel: [ 2280.601347]  [<ffffffff810bdb9d>] shrink_lruvec+0x42d/0x530
Nov 20 19:29:23 thoregon kernel: [ 2280.601356]  [<ffffffff810be323>] kswapd+0x683/0xa20
Nov 20 19:29:23 thoregon kernel: [ 2280.601365]  [<ffffffff8105ca30>] ? __init_waitqueue_head+0x60/0x60
Nov 20 19:29:23 thoregon kernel: [ 2280.601374]  [<ffffffff810bdca0>] ? shrink_lruvec+0x530/0x530
Nov 20 19:29:23 thoregon kernel: [ 2280.601381]  [<ffffffff8105c246>] kthread+0xd6/0xe0
Nov 20 19:29:23 thoregon kernel: [ 2280.601388]  [<ffffffff816b246b>] ? _raw_spin_unlock_irq+0x2b/0x50
Nov 20 19:29:23 thoregon kernel: [ 2280.601397]  [<ffffffff8105c170>] ? flush_kthread_worker+0xe0/0xe0
Nov 20 19:29:23 thoregon kernel: [ 2280.601404]  [<ffffffff816b31ac>] ret_from_fork+0x7c/0xb0
Nov 20 19:29:23 thoregon kernel: [ 2280.601412]  [<ffffffff8105c170>] ? flush_kthread_worker+0xe0/0xe0
Nov 20 19:29:23 thoregon kernel: [ 2280.601417] no locks held by kswapd0/725.
Nov 20 19:30:01 thoregon cron[27484]: (root) CMD (test -x /usr/sbin/run-crons && /usr/sbin/run-crons)
Nov 20 19:35:16 thoregon kernel: [ 2633.536440] SysRq : Show Blocked State
Nov 20 19:35:16 thoregon kernel: [ 2633.536453]   task                        PC stack   pid father
Nov 20 19:35:16 thoregon kernel: [ 2633.536485] kswapd0         D 0000000000000001     0   725      2 0x00000000
Nov 20 19:35:16 thoregon kernel: [ 2633.536496]  ffff8803280d13f8 0000000000000046 ffff880329a0ab80 ffff8803280d1fd8
Nov 20 19:35:16 thoregon kernel: [ 2633.536507]  ffff8803280d1fd8 ffff8803280d1fd8 ffff880046b7c880 ffff880329a0ab80
Nov 20 19:35:16 thoregon kernel: [ 2633.536517]  ffff8803280d1408 ffff8803278dbbd0 ffff8803278db800 00000000ffffffff
Nov 20 19:35:16 thoregon kernel: [ 2633.536526] Call Trace:
Nov 20 19:35:16 thoregon kernel: [ 2633.536542]  [<ffffffff816b1224>] schedule+0x24/0x60
Nov 20 19:35:16 thoregon kernel: [ 2633.536553]  [<ffffffff814f9dad>] md_super_wait+0x4d/0x80
Nov 20 19:35:16 thoregon kernel: [ 2633.536563]  [<ffffffff8105ca30>] ? __init_waitqueue_head+0x60/0x60
Nov 20 19:35:16 thoregon kernel: [ 2633.536571]  [<ffffffff81500753>] bitmap_unplug+0x173/0x180
Nov 20 19:35:16 thoregon kernel: [ 2633.536581]  [<ffffffff8108152d>] ? trace_hardirqs_on_caller+0xfd/0x190
Nov 20 19:35:16 thoregon kernel: [ 2633.536589]  [<ffffffff814e8eb8>] raid1_unplug+0x98/0x110
Nov 20 19:35:16 thoregon kernel: [ 2633.536599]  [<ffffffff81278a6d>] blk_flush_plug_list+0xad/0x240
Nov 20 19:35:16 thoregon kernel: [ 2633.536607]  [<ffffffff816b24ca>] ? _raw_spin_unlock_irqrestore+0x3a/0x70
Nov 20 19:35:16 thoregon kernel: [ 2633.536616]  [<ffffffff816b15c3>] io_schedule_timeout+0x83/0xf0
Nov 20 19:35:16 thoregon kernel: [ 2633.536623]  [<ffffffff810815cd>] ? trace_hardirqs_on+0xd/0x10
Nov 20 19:35:16 thoregon kernel: [ 2633.536633]  [<ffffffff810b0e1d>] mempool_alloc+0x12d/0x160
Nov 20 19:35:16 thoregon kernel: [ 2633.536641]  [<ffffffff8105ca30>] ? __init_waitqueue_head+0x60/0x60
Nov 20 19:35:16 thoregon kernel: [ 2633.536650]  [<ffffffff811263da>] bvec_alloc_bs+0xda/0x100
Nov 20 19:35:16 thoregon kernel: [ 2633.536658]  [<ffffffff811264ea>] bio_alloc_bioset+0xea/0x110
Nov 20 19:35:16 thoregon kernel: [ 2633.536666]  [<ffffffff81126656>] bio_clone_bioset+0x16/0x40
Nov 20 19:35:16 thoregon kernel: [ 2633.536673]  [<ffffffff814f471a>] bio_clone_mddev+0x1a/0x30
Nov 20 19:35:16 thoregon kernel: [ 2633.536681]  [<ffffffff814edbb1>] make_request+0x551/0xde0
Nov 20 19:35:16 thoregon kernel: [ 2633.536689]  [<ffffffff814ed9e0>] ? make_request+0x380/0xde0
Nov 20 19:35:16 thoregon kernel: [ 2633.536697]  [<ffffffff810b0b80>] ? mempool_alloc_slab+0x10/0x20
Nov 20 19:35:16 thoregon kernel: [ 2633.536706]  [<ffffffff814f80bb>] md_make_request+0x21b/0x4d0
Nov 20 19:35:16 thoregon kernel: [ 2633.536712]  [<ffffffff814f7ef0>] ? md_make_request+0x50/0x4d0
Nov 20 19:35:16 thoregon kernel: [ 2633.536722]  [<ffffffff81276e52>] generic_make_request+0xc2/0x100
Nov 20 19:35:16 thoregon kernel: [ 2633.536730]  [<ffffffff81276ef5>] submit_bio+0x65/0x110
Nov 20 19:35:16 thoregon kernel: [ 2633.536740]  [<ffffffff811e07bf>] xfs_submit_ioend_bio.isra.21+0x2f/0x40
Nov 20 19:35:16 thoregon kernel: [ 2633.536748]  [<ffffffff811e088e>] xfs_submit_ioend+0xbe/0x110
Nov 20 19:35:16 thoregon kernel: [ 2633.536757]  [<ffffffff811e0c91>] xfs_vm_writepage+0x3b1/0x540
Nov 20 19:35:16 thoregon kernel: [ 2633.536767]  [<ffffffff810bcd84>] shrink_page_list+0x564/0x890
Nov 20 19:35:16 thoregon kernel: [ 2633.536777]  [<ffffffff810bd637>] shrink_inactive_list+0x1d7/0x310
Nov 20 19:35:16 thoregon kernel: [ 2633.536785]  [<ffffffff810bdb9d>] shrink_lruvec+0x42d/0x530
Nov 20 19:35:16 thoregon kernel: [ 2633.536795]  [<ffffffff810be323>] kswapd+0x683/0xa20
Nov 20 19:35:16 thoregon kernel: [ 2633.536804]  [<ffffffff8105ca30>] ? __init_waitqueue_head+0x60/0x60
Nov 20 19:35:16 thoregon kernel: [ 2633.536812]  [<ffffffff810bdca0>] ? shrink_lruvec+0x530/0x530
Nov 20 19:35:16 thoregon kernel: [ 2633.536819]  [<ffffffff8105c246>] kthread+0xd6/0xe0
Nov 20 19:35:16 thoregon kernel: [ 2633.536826]  [<ffffffff816b246b>] ? _raw_spin_unlock_irq+0x2b/0x50
Nov 20 19:35:16 thoregon kernel: [ 2633.536835]  [<ffffffff8105c170>] ? flush_kthread_worker+0xe0/0xe0
Nov 20 19:35:16 thoregon kernel: [ 2633.536842]  [<ffffffff816b31ac>] ret_from_fork+0x7c/0xb0
Nov 20 19:35:16 thoregon kernel: [ 2633.536849]  [<ffffffff8105c170>] ? flush_kthread_worker+0xe0/0xe0
Nov 20 19:35:16 thoregon kernel: [ 2633.536890] xfsaild/md4     D 0000000000000003     0  1742      2 0x00000000
Nov 20 19:35:16 thoregon kernel: [ 2633.536898]  ffff88032438bb68 0000000000000046 ffff880329965700 ffff88032438bfd8
Nov 20 19:35:16 thoregon kernel: [ 2633.536908]  ffff88032438bfd8 ffff88032438bfd8 ffff88032827e580 ffff880329965700
Nov 20 19:35:16 thoregon kernel: [ 2633.536917]  ffff88032438bb78 ffff8803278dbbd0 ffff8803278db800 00000000ffffffff
Nov 20 19:35:16 thoregon kernel: [ 2633.536926] Call Trace:
Nov 20 19:35:16 thoregon kernel: [ 2633.536935]  [<ffffffff816b1224>] schedule+0x24/0x60
Nov 20 19:35:16 thoregon kernel: [ 2633.536942]  [<ffffffff814f9dad>] md_super_wait+0x4d/0x80
Nov 20 19:35:16 thoregon kernel: [ 2633.536950]  [<ffffffff8105ca30>] ? __init_waitqueue_head+0x60/0x60
Nov 20 19:35:16 thoregon kernel: [ 2633.536957]  [<ffffffff81500753>] bitmap_unplug+0x173/0x180
Nov 20 19:35:16 thoregon kernel: [ 2633.536966]  [<ffffffff81278c13>] ? blk_finish_plug+0x13/0x50
Nov 20 19:35:16 thoregon kernel: [ 2633.536974]  [<ffffffff814e8eb8>] raid1_unplug+0x98/0x110
Nov 20 19:35:16 thoregon kernel: [ 2633.536982]  [<ffffffff81278a6d>] blk_flush_plug_list+0xad/0x240
Nov 20 19:35:16 thoregon kernel: [ 2633.536991]  [<ffffffff81278c13>] blk_finish_plug+0x13/0x50
Nov 20 19:35:16 thoregon kernel: [ 2633.536998]  [<ffffffff811e296a>] __xfs_buf_delwri_submit+0x1ca/0x1e0
Nov 20 19:35:16 thoregon kernel: [ 2633.537006]  [<ffffffff811e2ffb>] xfs_buf_delwri_submit_nowait+0x1b/0x20
Nov 20 19:35:16 thoregon kernel: [ 2633.537014]  [<ffffffff81233066>] xfsaild+0x226/0x4c0
Nov 20 19:35:16 thoregon kernel: [ 2633.537022]  [<ffffffff81065dfa>] ? finish_task_switch+0x3a/0x100
Nov 20 19:35:16 thoregon kernel: [ 2633.537030]  [<ffffffff81232e40>] ? xfs_trans_ail_cursor_first+0xa0/0xa0
Nov 20 19:35:16 thoregon kernel: [ 2633.537037]  [<ffffffff8105c246>] kthread+0xd6/0xe0
Nov 20 19:35:16 thoregon kernel: [ 2633.537044]  [<ffffffff816b246b>] ? _raw_spin_unlock_irq+0x2b/0x50
Nov 20 19:35:16 thoregon kernel: [ 2633.537053]  [<ffffffff8105c170>] ? flush_kthread_worker+0xe0/0xe0
Nov 20 19:35:16 thoregon kernel: [ 2633.537060]  [<ffffffff816b31ac>] ret_from_fork+0x7c/0xb0
Nov 20 19:35:16 thoregon kernel: [ 2633.537067]  [<ffffffff8105c170>] ? flush_kthread_worker+0xe0/0xe0
Nov 20 19:35:16 thoregon kernel: [ 2633.537158] plasma-desktop  D 0000000000000001     0  3195      1 0x00000000
Nov 20 19:35:16 thoregon kernel: [ 2633.537165]  ffff88030b99d448 0000000000000046 ffff8803266c1d00 ffff88030b99dfd8
Nov 20 19:35:16 thoregon kernel: [ 2633.537174]  ffff88030b99dfd8 ffff88030b99dfd8 ffff880046b78e80 ffff8803266c1d00
Nov 20 19:35:16 thoregon kernel: [ 2633.537183]  0000000000000282 ffff88030b99d468 0000000100038f76 0000000100038f74
Nov 20 19:35:16 thoregon kernel: [ 2633.537192] Call Trace:
Nov 20 19:35:16 thoregon kernel: [ 2633.537201]  [<ffffffff816b1224>] schedule+0x24/0x60
Nov 20 19:35:16 thoregon kernel: [ 2633.537210]  [<ffffffff816aed8e>] schedule_timeout+0x12e/0x1c0
Nov 20 19:35:16 thoregon kernel: [ 2633.537219]  [<ffffffff81048730>] ? call_timer_fn+0xf0/0xf0
Nov 20 19:35:16 thoregon kernel: [ 2633.537228]  [<ffffffff816aee39>] schedule_timeout_uninterruptible+0x19/0x20
Nov 20 19:35:16 thoregon kernel: [ 2633.537235]  [<ffffffff811f1f99>] xfs_reclaim_inode+0x129/0x340
Nov 20 19:35:16 thoregon kernel: [ 2633.537242]  [<ffffffff811f2d6f>] xfs_reclaim_inodes_ag+0x2bf/0x4f0
Nov 20 19:35:16 thoregon kernel: [ 2633.537249]  [<ffffffff811f2b90>] ? xfs_reclaim_inodes_ag+0xe0/0x4f0
Nov 20 19:35:16 thoregon kernel: [ 2633.537258]  [<ffffffff811f30de>] xfs_reclaim_inodes_nr+0x2e/0x40
Nov 20 19:35:16 thoregon kernel: [ 2633.537267]  [<ffffffff811ef9b0>] xfs_fs_free_cached_objects+0x10/0x20
Nov 20 19:35:16 thoregon kernel: [ 2633.537274]  [<ffffffff810f5b43>] prune_super+0x113/0x1a0
Nov 20 19:35:16 thoregon kernel: [ 2633.537282]  [<ffffffff810bbd4e>] shrink_slab+0x11e/0x1f0
Nov 20 19:35:16 thoregon kernel: [ 2633.537290]  [<ffffffff810b4fa5>] ? drain_pages+0x95/0xc0
Nov 20 19:35:16 thoregon kernel: [ 2633.537298]  [<ffffffff810be8df>] try_to_free_pages+0x21f/0x4e0
Nov 20 19:35:16 thoregon kernel: [ 2633.537306]  [<ffffffff810b5eb6>] __alloc_pages_nodemask+0x506/0x800
Nov 20 19:35:16 thoregon kernel: [ 2633.537316]  [<ffffffff81570ed3>] ? __alloc_skb+0x83/0x280
Nov 20 19:35:16 thoregon kernel: [ 2633.537323]  [<ffffffff810b6232>] __get_free_pages+0x12/0x40
Nov 20 19:35:16 thoregon kernel: [ 2633.537331]  [<ffffffff810ebd8c>] __kmalloc_track_caller+0x14c/0x170
Nov 20 19:35:16 thoregon kernel: [ 2633.537339]  [<ffffffff815703e6>] __kmalloc_reserve+0x36/0x90
Nov 20 19:35:16 thoregon kernel: [ 2633.537346]  [<ffffffff81570ed3>] __alloc_skb+0x83/0x280
Nov 20 19:35:16 thoregon kernel: [ 2633.537354]  [<ffffffff8105c905>] ? remove_wait_queue+0x55/0x60
Nov 20 19:35:16 thoregon kernel: [ 2633.537361]  [<ffffffff81569fb1>] sock_alloc_send_pskb+0x251/0x3e0
Nov 20 19:35:16 thoregon kernel: [ 2633.537369]  [<ffffffff8156a150>] sock_alloc_send_skb+0x10/0x20
Nov 20 19:35:16 thoregon kernel: [ 2633.537376]  [<ffffffff81615f21>] unix_stream_sendmsg+0x2e1/0x420
Nov 20 19:35:16 thoregon kernel: [ 2633.537385]  [<ffffffff81564900>] ? sock_aio_read+0x30/0x30
Nov 20 19:35:16 thoregon kernel: [ 2633.537393]  [<ffffffff81564a0c>] sock_aio_write+0x10c/0x140
Nov 20 19:35:16 thoregon kernel: [ 2633.537403]  [<ffffffff810f2eeb>] do_sync_readv_writev+0x9b/0xe0
Nov 20 19:35:16 thoregon kernel: [ 2633.537413]  [<ffffffff810f31a8>] do_readv_writev+0xc8/0x1c0
Nov 20 19:35:16 thoregon kernel: [ 2633.537421]  [<ffffffff8108152d>] ? trace_hardirqs_on_caller+0xfd/0x190
Nov 20 19:35:16 thoregon kernel: [ 2633.537428]  [<ffffffff8107c899>] ? __lock_is_held+0x59/0x70
Nov 20 19:35:16 thoregon kernel: [ 2633.537438]  [<ffffffff8110f892>] ? fget_light+0x42/0x4b0
Nov 20 19:35:16 thoregon kernel: [ 2633.537446]  [<ffffffff810f32d7>] vfs_writev+0x37/0x50
Nov 20 19:35:16 thoregon kernel: [ 2633.537454]  [<ffffffff810f341d>] sys_writev+0x4d/0x90
Nov 20 19:35:16 thoregon kernel: [ 2633.537462]  [<ffffffff816b3252>] system_call_fastpath+0x16/0x1b
Nov 20 19:35:16 thoregon kernel: [ 2633.537518] ssh             D 0000000000000000     0  3573      1 0x00000004
Nov 20 19:35:16 thoregon kernel: [ 2633.537525]  ffff8802c9f2f508 0000000000000046 ffff8802f04dd700 ffff8802c9f2ffd8
Nov 20 19:35:16 thoregon kernel: [ 2633.537534]  ffff8802c9f2ffd8 ffff8802c9f2ffd8 ffffffff81c13420 ffff8802f04dd700
Nov 20 19:35:16 thoregon kernel: [ 2633.537543]  0000000000000282 ffff8802c9f2f528 0000000100038f75 0000000100038f73
Nov 20 19:35:16 thoregon kernel: [ 2633.537552] Call Trace:
Nov 20 19:35:16 thoregon kernel: [ 2633.537561]  [<ffffffff816b1224>] schedule+0x24/0x60
Nov 20 19:35:16 thoregon kernel: [ 2633.537569]  [<ffffffff816aed8e>] schedule_timeout+0x12e/0x1c0
Nov 20 19:35:16 thoregon kernel: [ 2633.537578]  [<ffffffff81048730>] ? call_timer_fn+0xf0/0xf0
Nov 20 19:35:16 thoregon kernel: [ 2633.537587]  [<ffffffff816aee39>] schedule_timeout_uninterruptible+0x19/0x20
Nov 20 19:35:16 thoregon kernel: [ 2633.537594]  [<ffffffff811f1f99>] xfs_reclaim_inode+0x129/0x340
Nov 20 19:35:16 thoregon kernel: [ 2633.537601]  [<ffffffff811f2d6f>] xfs_reclaim_inodes_ag+0x2bf/0x4f0
Nov 20 19:35:16 thoregon kernel: [ 2633.537607]  [<ffffffff811f2b90>] ? xfs_reclaim_inodes_ag+0xe0/0x4f0
Nov 20 19:35:16 thoregon kernel: [ 2633.537616]  [<ffffffff811f30de>] xfs_reclaim_inodes_nr+0x2e/0x40
Nov 20 19:35:16 thoregon kernel: [ 2633.537625]  [<ffffffff811ef9b0>] xfs_fs_free_cached_objects+0x10/0x20
Nov 20 19:35:16 thoregon kernel: [ 2633.537631]  [<ffffffff810f5b43>] prune_super+0x113/0x1a0
Nov 20 19:35:16 thoregon kernel: [ 2633.537639]  [<ffffffff810bbd4e>] shrink_slab+0x11e/0x1f0
Nov 20 19:35:16 thoregon kernel: [ 2633.537647]  [<ffffffff810b4fa5>] ? drain_pages+0x95/0xc0
Nov 20 19:35:16 thoregon kernel: [ 2633.537655]  [<ffffffff810be8df>] try_to_free_pages+0x21f/0x4e0
Nov 20 19:35:16 thoregon kernel: [ 2633.537663]  [<ffffffff810b5eb6>] __alloc_pages_nodemask+0x506/0x800
Nov 20 19:35:16 thoregon kernel: [ 2633.537670]  [<ffffffff81104d10>] ? poll_select_set_timeout+0x90/0x90
Nov 20 19:35:16 thoregon kernel: [ 2633.537679]  [<ffffffff81570ed3>] ? __alloc_skb+0x83/0x280
Nov 20 19:35:16 thoregon kernel: [ 2633.537686]  [<ffffffff810b6232>] __get_free_pages+0x12/0x40
Nov 20 19:35:16 thoregon kernel: [ 2633.537693]  [<ffffffff810ebd8c>] __kmalloc_track_caller+0x14c/0x170
Nov 20 19:35:16 thoregon kernel: [ 2633.537701]  [<ffffffff815703e6>] __kmalloc_reserve+0x36/0x90
Nov 20 19:35:16 thoregon kernel: [ 2633.537708]  [<ffffffff81570ed3>] __alloc_skb+0x83/0x280
Nov 20 19:35:16 thoregon kernel: [ 2633.537715]  [<ffffffff81569ac3>] ? release_sock+0x183/0x1e0
Nov 20 19:35:16 thoregon kernel: [ 2633.537722]  [<ffffffff81569fb1>] sock_alloc_send_pskb+0x251/0x3e0
Nov 20 19:35:16 thoregon kernel: [ 2633.537729]  [<ffffffff8156a150>] sock_alloc_send_skb+0x10/0x20
Nov 20 19:35:16 thoregon kernel: [ 2633.537736]  [<ffffffff81615f21>] unix_stream_sendmsg+0x2e1/0x420
Nov 20 19:35:16 thoregon kernel: [ 2633.537745]  [<ffffffff81564a0c>] sock_aio_write+0x10c/0x140
Nov 20 19:35:16 thoregon kernel: [ 2633.537753]  [<ffffffff810cb5ae>] ? might_fault+0x4e/0xa0
Nov 20 19:35:16 thoregon kernel: [ 2633.537760]  [<ffffffff811059aa>] ? core_sys_select+0x47a/0x4d0
Nov 20 19:35:16 thoregon kernel: [ 2633.537769]  [<ffffffff810f238b>] do_sync_write+0x9b/0xe0
Nov 20 19:35:16 thoregon kernel: [ 2633.537778]  [<ffffffff810f2a75>] vfs_write+0x145/0x160
Nov 20 19:35:16 thoregon kernel: [ 2633.537787]  [<ffffffff810f2ccd>] sys_write+0x4d/0x90
Nov 20 19:35:16 thoregon kernel: [ 2633.537794]  [<ffffffff816b3252>] system_call_fastpath+0x16/0x1b
Nov 20 19:35:16 thoregon kernel: [ 2633.537801] bash            D 0000000000000000     0 28378  28334 0x00000000
Nov 20 19:35:16 thoregon kernel: [ 2633.537809]  ffff880299f5d5d8 0000000000000046 ffff880326a83a00 ffff880299f5dfd8
Nov 20 19:35:16 thoregon kernel: [ 2633.537818]  ffff880299f5dfd8 ffff880299f5dfd8 ffff88032989ba00 ffff880326a83a00
Nov 20 19:35:16 thoregon kernel: [ 2633.537827]  ffff880299f5d6a8 ffff880299c50230 7fffffffffffffff ffff880326a83a00
Nov 20 19:35:16 thoregon kernel: [ 2633.537836] Call Trace:
Nov 20 19:35:16 thoregon kernel: [ 2633.537845]  [<ffffffff816b1224>] schedule+0x24/0x60
Nov 20 19:35:16 thoregon kernel: [ 2633.537853]  [<ffffffff816aeddd>] schedule_timeout+0x17d/0x1c0
Nov 20 19:35:16 thoregon kernel: [ 2633.537860]  [<ffffffff816b246b>] ? _raw_spin_unlock_irq+0x2b/0x50
Nov 20 19:35:16 thoregon kernel: [ 2633.537868]  [<ffffffff8108152d>] ? trace_hardirqs_on_caller+0xfd/0x190
Nov 20 19:35:16 thoregon kernel: [ 2633.537876]  [<ffffffff816b057e>] __down_common+0x97/0xe8
Nov 20 19:35:16 thoregon kernel: [ 2633.537885]  [<ffffffff816b062e>] __down+0x18/0x1a
Nov 20 19:35:16 thoregon kernel: [ 2633.537891]  [<ffffffff8106245b>] down+0x3b/0x50
Nov 20 19:35:16 thoregon kernel: [ 2633.537898]  [<ffffffff811e21b5>] xfs_buf_lock+0x25/0x60
Nov 20 19:35:16 thoregon kernel: [ 2633.537904]  [<ffffffff811e22e2>] _xfs_buf_find+0xf2/0x200
Nov 20 19:35:16 thoregon kernel: [ 2633.537911]  [<ffffffff811e241f>] xfs_buf_get_map+0x2f/0x130
Nov 20 19:35:16 thoregon kernel: [ 2633.537918]  [<ffffffff811e2b23>] xfs_buf_read_map+0x13/0x90
Nov 20 19:35:16 thoregon kernel: [ 2633.537925]  [<ffffffff81233e8d>] xfs_trans_read_buf_map+0x1ad/0x2d0
Nov 20 19:35:16 thoregon kernel: [ 2633.537933]  [<ffffffff811f73fb>] xfs_alloc_read_agfl+0x6b/0x90
Nov 20 19:35:16 thoregon kernel: [ 2633.537941]  [<ffffffff811f986d>] ? xfs_alloc_read_agf+0xcd/0xf0
Nov 20 19:35:16 thoregon kernel: [ 2633.537948]  [<ffffffff810e9a93>] ? kmem_cache_free+0x83/0x150
Nov 20 19:35:16 thoregon kernel: [ 2633.537955]  [<ffffffff811f9ac1>] xfs_alloc_fix_freelist+0x231/0x480
Nov 20 19:35:16 thoregon kernel: [ 2633.537962]  [<ffffffff8107c899>] ? __lock_is_held+0x59/0x70
Nov 20 19:35:16 thoregon kernel: [ 2633.537971]  [<ffffffff811fa279>] xfs_free_extent+0x99/0x120
Nov 20 19:35:16 thoregon kernel: [ 2633.537978]  [<ffffffff811f6fff>] ? kmem_zone_alloc+0x5f/0xe0
Nov 20 19:35:16 thoregon kernel: [ 2633.537987]  [<ffffffff81206edc>] xfs_bmap_finish+0x15c/0x1a0
Nov 20 19:35:16 thoregon kernel: [ 2633.537996]  [<ffffffff8121f9eb>] xfs_itruncate_extents+0xdb/0x200
Nov 20 19:35:16 thoregon kernel: [ 2633.538006]  [<ffffffff811ed7d5>] xfs_setattr_size+0x355/0x3b0
Nov 20 19:35:16 thoregon kernel: [ 2633.538014]  [<ffffffff811ed85e>] xfs_vn_setattr+0x2e/0x40
Nov 20 19:35:16 thoregon kernel: [ 2633.538022]  [<ffffffff8110dfa6>] notify_change+0x186/0x3a0
Nov 20 19:35:16 thoregon kernel: [ 2633.538030]  [<ffffffff810f1229>] do_truncate+0x59/0x90
Nov 20 19:35:16 thoregon kernel: [ 2633.538039]  [<ffffffff810fd777>] ? __inode_permission+0x27/0x80
Nov 20 19:35:16 thoregon kernel: [ 2633.538048]  [<ffffffff811008da>] do_last.isra.69+0x60a/0xc80
Nov 20 19:35:16 thoregon kernel: [ 2633.538056]  [<ffffffff810fd7e3>] ? inode_permission+0x13/0x50
Nov 20 19:35:16 thoregon kernel: [ 2633.538064]  [<ffffffff810fd882>] ? link_path_walk+0x62/0x910
Nov 20 19:35:16 thoregon kernel: [ 2633.538073]  [<ffffffff81100ffb>] path_openat.isra.70+0xab/0x490
Nov 20 19:35:16 thoregon kernel: [ 2633.538080]  [<ffffffff8107c899>] ? __lock_is_held+0x59/0x70
Nov 20 19:35:16 thoregon kernel: [ 2633.538089]  [<ffffffff8110179d>] do_filp_open+0x3d/0xa0
Nov 20 19:35:16 thoregon kernel: [ 2633.538097]  [<ffffffff8111082b>] ? __alloc_fd+0x16b/0x210
Nov 20 19:35:16 thoregon kernel: [ 2633.538106]  [<ffffffff810f2089>] do_sys_open+0xf9/0x1e0
Nov 20 19:35:16 thoregon kernel: [ 2633.538114]  [<ffffffff810f218c>] sys_open+0x1c/0x20
Nov 20 19:35:16 thoregon kernel: [ 2633.538121]  [<ffffffff816b3252>] system_call_fastpath+0x16/0x1b
Nov 20 19:35:16 thoregon kernel: [ 2633.538126] flush-9:4       D 0000000000000001     0 30365      2 0x00000000
Nov 20 19:35:16 thoregon kernel: [ 2633.538134]  ffff8801d8ce1978 0000000000000046 ffff880328238e80 ffff8801d8ce1fd8
Nov 20 19:35:16 thoregon kernel: [ 2633.538143]  ffff8801d8ce1fd8 ffff8801d8ce1fd8 ffff8803032cc880 ffff880328238e80
Nov 20 19:35:16 thoregon kernel: [ 2633.538151]  ffff8801d8ce1988 ffff8803278dbbd0 ffff8803278db800 00000000ffffffff
Nov 20 19:35:16 thoregon kernel: [ 2633.538160] Call Trace:
Nov 20 19:35:16 thoregon kernel: [ 2633.538169]  [<ffffffff816b1224>] schedule+0x24/0x60
Nov 20 19:35:16 thoregon kernel: [ 2633.538176]  [<ffffffff814f9dad>] md_super_wait+0x4d/0x80
Nov 20 19:35:16 thoregon kernel: [ 2633.538184]  [<ffffffff8105ca30>] ? __init_waitqueue_head+0x60/0x60
Nov 20 19:35:16 thoregon kernel: [ 2633.538192]  [<ffffffff81500753>] bitmap_unplug+0x173/0x180
Nov 20 19:35:16 thoregon kernel: [ 2633.538199]  [<ffffffff810b6acf>] ? write_cache_pages+0x12f/0x420
Nov 20 19:35:16 thoregon kernel: [ 2633.538206]  [<ffffffff810b6700>] ? set_page_dirty_lock+0x60/0x60
Nov 20 19:35:16 thoregon kernel: [ 2633.538214]  [<ffffffff814e8eb8>] raid1_unplug+0x98/0x110
Nov 20 19:35:16 thoregon kernel: [ 2633.538222]  [<ffffffff81278a6d>] blk_flush_plug_list+0xad/0x240
Nov 20 19:35:16 thoregon kernel: [ 2633.538231]  [<ffffffff81278c13>] blk_finish_plug+0x13/0x50
Nov 20 19:35:16 thoregon kernel: [ 2633.538238]  [<ffffffff810b6e0a>] generic_writepages+0x4a/0x60
Nov 20 19:35:16 thoregon kernel: [ 2633.538247]  [<ffffffff811df898>] xfs_vm_writepages+0x48/0x60
Nov 20 19:35:16 thoregon kernel: [ 2633.538256]  [<ffffffff8111b144>] ? writeback_sb_inodes+0x174/0x410
Nov 20 19:35:16 thoregon kernel: [ 2633.538264]  [<ffffffff810b84be>] do_writepages+0x1e/0x30
Nov 20 19:35:16 thoregon kernel: [ 2633.538272]  [<ffffffff8111a1ee>] __writeback_single_inode+0x3e/0x120
Nov 20 19:35:16 thoregon kernel: [ 2633.538280]  [<ffffffff8111b234>] writeback_sb_inodes+0x264/0x410
Nov 20 19:35:16 thoregon kernel: [ 2633.538289]  [<ffffffff8111b476>] __writeback_inodes_wb+0x96/0xc0
Nov 20 19:35:16 thoregon kernel: [ 2633.538298]  [<ffffffff8111b673>] wb_writeback+0x1d3/0x1e0
Nov 20 19:35:16 thoregon kernel: [ 2633.538305]  [<ffffffff8108152d>] ? trace_hardirqs_on_caller+0xfd/0x190
Nov 20 19:35:16 thoregon kernel: [ 2633.538313]  [<ffffffff8111bb1c>] ? wb_do_writeback+0x7c/0x140
Nov 20 19:35:16 thoregon kernel: [ 2633.538322]  [<ffffffff8111bb40>] wb_do_writeback+0xa0/0x140
Nov 20 19:35:16 thoregon kernel: [ 2633.538331]  [<ffffffff8111bc52>] bdi_writeback_thread+0x72/0x150
Nov 20 19:35:16 thoregon kernel: [ 2633.538339]  [<ffffffff8111bbe0>] ? wb_do_writeback+0x140/0x140
Nov 20 19:35:16 thoregon kernel: [ 2633.538346]  [<ffffffff8105c246>] kthread+0xd6/0xe0
Nov 20 19:35:16 thoregon kernel: [ 2633.538353]  [<ffffffff816b246b>] ? _raw_spin_unlock_irq+0x2b/0x50
Nov 20 19:35:16 thoregon kernel: [ 2633.538361]  [<ffffffff8105c170>] ? flush_kthread_worker+0xe0/0xe0
Nov 20 19:35:16 thoregon kernel: [ 2633.538369]  [<ffffffff816b31ac>] ret_from_fork+0x7c/0xb0
Nov 20 19:35:16 thoregon kernel: [ 2633.538376]  [<ffffffff8105c170>] ? flush_kthread_worker+0xe0/0xe0
Nov 20 19:35:16 thoregon kernel: [ 2633.538384] x86_64-pc-linux D 0000000000000000     0 26516      1 0x00000004
Nov 20 19:35:16 thoregon kernel: [ 2633.538391]  ffff8800468dddf8 0000000000000046 ffff880329303a00 ffff8800468ddfd8
Nov 20 19:35:16 thoregon kernel: [ 2633.538400]  ffff8800468ddfd8 ffff8800468ddfd8 ffff88032989c880 ffff880329303a00
Nov 20 19:35:16 thoregon kernel: [ 2633.538409]  0000000000000246 ffff8800468dc000 ffff8801b1a1bc68 0000000002405150
Nov 20 19:35:16 thoregon kernel: [ 2633.538418] Call Trace:
Nov 20 19:35:16 thoregon kernel: [ 2633.538427]  [<ffffffff816b1224>] schedule+0x24/0x60
Nov 20 19:35:16 thoregon kernel: [ 2633.538436]  [<ffffffff816b1501>] schedule_preempt_disabled+0x21/0x30
Nov 20 19:35:16 thoregon kernel: [ 2633.538444]  [<ffffffff816af3a5>] mutex_lock_nested+0x165/0x330
Nov 20 19:35:16 thoregon kernel: [ 2633.538452]  [<ffffffff810ff5e3>] ? do_unlinkat+0xa3/0x1b0
Nov 20 19:35:16 thoregon kernel: [ 2633.538461]  [<ffffffff810fefae>] ? filename_lookup.isra.57+0x2e/0x80
Nov 20 19:35:16 thoregon kernel: [ 2633.538469]  [<ffffffff810ff5e3>] do_unlinkat+0xa3/0x1b0
Nov 20 19:35:16 thoregon kernel: [ 2633.538477]  [<ffffffff8108152d>] ? trace_hardirqs_on_caller+0xfd/0x190
Nov 20 19:35:16 thoregon kernel: [ 2633.538485]  [<ffffffff8129ca1e>] ? trace_hardirqs_on_thunk+0x3a/0x3f
Nov 20 19:35:16 thoregon kernel: [ 2633.538494]  [<ffffffff81101bb1>] sys_unlink+0x11/0x20
Nov 20 19:35:16 thoregon kernel: [ 2633.538501]  [<ffffffff816b3252>] system_call_fastpath+0x16/0x1b
Nov 20 19:35:16 thoregon kernel: [ 2633.538505] x86_64-pc-linux D 0000000000000000     0 26523      1 0x00000004
Nov 20 19:35:16 thoregon kernel: [ 2633.538512]  ffff8800c0def858 0000000000000046 ffff88032986c880 ffff8800c0deffd8
Nov 20 19:35:16 thoregon kernel: [ 2633.538521]  ffff8800c0deffd8 ffff8800c0deffd8 ffff88032989e580 ffff88032986c880
Nov 20 19:35:16 thoregon kernel: [ 2633.538530]  ffff8800c0def928 ffff8801c57e3a30 7fffffffffffffff ffff88032986c880
Nov 20 19:35:16 thoregon kernel: [ 2633.538539] Call Trace:
Nov 20 19:35:16 thoregon kernel: [ 2633.538548]  [<ffffffff816b1224>] schedule+0x24/0x60
Nov 20 19:35:16 thoregon kernel: [ 2633.538556]  [<ffffffff816aeddd>] schedule_timeout+0x17d/0x1c0
Nov 20 19:35:16 thoregon kernel: [ 2633.538563]  [<ffffffff816b246b>] ? _raw_spin_unlock_irq+0x2b/0x50
Nov 20 19:35:16 thoregon kernel: [ 2633.538571]  [<ffffffff8108152d>] ? trace_hardirqs_on_caller+0xfd/0x190
Nov 20 19:35:16 thoregon kernel: [ 2633.538579]  [<ffffffff816b057e>] __down_common+0x97/0xe8
Nov 20 19:35:16 thoregon kernel: [ 2633.538588]  [<ffffffff816b062e>] __down+0x18/0x1a
Nov 20 19:35:16 thoregon kernel: [ 2633.538594]  [<ffffffff8106245b>] down+0x3b/0x50
Nov 20 19:35:16 thoregon kernel: [ 2633.538601]  [<ffffffff811e21b5>] xfs_buf_lock+0x25/0x60
Nov 20 19:35:16 thoregon kernel: [ 2633.538607]  [<ffffffff811e22e2>] _xfs_buf_find+0xf2/0x200
Nov 20 19:35:16 thoregon kernel: [ 2633.538614]  [<ffffffff811e241f>] xfs_buf_get_map+0x2f/0x130
Nov 20 19:35:16 thoregon kernel: [ 2633.538621]  [<ffffffff81210417>] ? xfs_dabuf_map.isra.23+0xb7/0x2f0
Nov 20 19:35:16 thoregon kernel: [ 2633.538628]  [<ffffffff810b572e>] ? get_page_from_freelist+0x1de/0x460
Nov 20 19:35:16 thoregon kernel: [ 2633.538635]  [<ffffffff811e2b23>] xfs_buf_read_map+0x13/0x90
Nov 20 19:35:16 thoregon kernel: [ 2633.538642]  [<ffffffff81233e8d>] xfs_trans_read_buf_map+0x1ad/0x2d0
Nov 20 19:35:16 thoregon kernel: [ 2633.538650]  [<ffffffff812116b6>] xfs_da_read_buf+0xb6/0x180
Nov 20 19:35:16 thoregon kernel: [ 2633.538659]  [<ffffffff8121724c>] xfs_dir2_leaf_lookup_int+0x5c/0x2c0
Nov 20 19:35:16 thoregon kernel: [ 2633.538668]  [<ffffffff81217583>] xfs_dir2_leaf_removename+0x33/0x330
Nov 20 19:35:16 thoregon kernel: [ 2633.538676]  [<ffffffff8120777b>] ? xfs_bmap_last_offset+0x8b/0xc0
Nov 20 19:35:16 thoregon kernel: [ 2633.538684]  [<ffffffff81213b94>] xfs_dir_removename+0x134/0x150
Nov 20 19:35:16 thoregon kernel: [ 2633.538692]  [<ffffffff811f5ce6>] xfs_remove+0x1f6/0x370
Nov 20 19:35:16 thoregon kernel: [ 2633.538701]  [<ffffffff816af4ac>] ? mutex_lock_nested+0x26c/0x330
Nov 20 19:35:16 thoregon kernel: [ 2633.538709]  [<ffffffff810ff48f>] ? vfs_unlink+0x4f/0x100
Nov 20 19:35:16 thoregon kernel: [ 2633.538717]  [<ffffffff811ec7f3>] xfs_vn_unlink+0x43/0x90
Nov 20 19:35:16 thoregon kernel: [ 2633.538725]  [<ffffffff810ff4cd>] vfs_unlink+0x8d/0x100
Nov 20 19:35:16 thoregon kernel: [ 2633.538733]  [<ffffffff810ff63a>] do_unlinkat+0xfa/0x1b0
Nov 20 19:35:16 thoregon kernel: [ 2633.538741]  [<ffffffff8108152d>] ? trace_hardirqs_on_caller+0xfd/0x190
Nov 20 19:35:16 thoregon kernel: [ 2633.538748]  [<ffffffff8129ca1e>] ? trace_hardirqs_on_thunk+0x3a/0x3f
Nov 20 19:35:16 thoregon kernel: [ 2633.538757]  [<ffffffff81101bb1>] sys_unlink+0x11/0x20
Nov 20 19:35:16 thoregon kernel: [ 2633.538764]  [<ffffffff816b3252>] system_call_fastpath+0x16/0x1b
Nov 20 19:35:16 thoregon kernel: [ 2633.538769] x86_64-pc-linux D 0000000000000000     0 26875      1 0x00000004
Nov 20 19:35:16 thoregon kernel: [ 2633.538776]  ffff8800c0c5ba08 0000000000000046 ffff8802f0594880 ffff8800c0c5bfd8
Nov 20 19:35:16 thoregon kernel: [ 2633.538785]  ffff8800c0c5bfd8 ffff8800c0c5bfd8 ffff88032989c880 ffff8802f0594880
Nov 20 19:35:16 thoregon kernel: [ 2633.538793]  0000000000000250 ffff880205a7aa30 7fffffffffffffff ffff8802f0594880
Nov 20 19:35:16 thoregon kernel: [ 2633.538802] Call Trace:
Nov 20 19:35:16 thoregon kernel: [ 2633.538811]  [<ffffffff816b1224>] schedule+0x24/0x60
Nov 20 19:35:16 thoregon kernel: [ 2633.538819]  [<ffffffff816aeddd>] schedule_timeout+0x17d/0x1c0
Nov 20 19:35:16 thoregon kernel: [ 2633.538826]  [<ffffffff811f70b4>] ? kmem_zone_zalloc+0x34/0x50
Nov 20 19:35:16 thoregon kernel: [ 2633.538833]  [<ffffffff816b246b>] ? _raw_spin_unlock_irq+0x2b/0x50
Nov 20 19:35:16 thoregon kernel: [ 2633.538841]  [<ffffffff8108152d>] ? trace_hardirqs_on_caller+0xfd/0x190
Nov 20 19:35:16 thoregon kernel: [ 2633.538849]  [<ffffffff816b057e>] __down_common+0x97/0xe8
Nov 20 19:35:16 thoregon kernel: [ 2633.538857]  [<ffffffff816b062e>] __down+0x18/0x1a
Nov 20 19:35:16 thoregon kernel: [ 2633.538864]  [<ffffffff8106245b>] down+0x3b/0x50
Nov 20 19:35:16 thoregon kernel: [ 2633.538870]  [<ffffffff811e21b5>] xfs_buf_lock+0x25/0x60
Nov 20 19:35:16 thoregon kernel: [ 2633.538877]  [<ffffffff811e22e2>] _xfs_buf_find+0xf2/0x200
Nov 20 19:35:16 thoregon kernel: [ 2633.538883]  [<ffffffff811e241f>] xfs_buf_get_map+0x2f/0x130
Nov 20 19:35:16 thoregon kernel: [ 2633.538890]  [<ffffffff811e2b23>] xfs_buf_read_map+0x13/0x90
Nov 20 19:35:16 thoregon kernel: [ 2633.538897]  [<ffffffff81233e8d>] xfs_trans_read_buf_map+0x1ad/0x2d0
Nov 20 19:35:16 thoregon kernel: [ 2633.538906]  [<ffffffff8121ce6f>] xfs_read_agi+0x7f/0x110
Nov 20 19:35:16 thoregon kernel: [ 2633.538915]  [<ffffffff8121fb55>] xfs_iunlink+0x45/0x150
Nov 20 19:35:16 thoregon kernel: [ 2633.538922]  [<ffffffff81041fc1>] ? current_fs_time+0x11/0x50
Nov 20 19:35:16 thoregon kernel: [ 2633.538929]  [<ffffffff81234352>] ? xfs_trans_ichgtime+0x22/0xa0
Nov 20 19:35:16 thoregon kernel: [ 2633.538936]  [<ffffffff811f3ea0>] xfs_droplink+0x50/0x60
Nov 20 19:35:16 thoregon kernel: [ 2633.538943]  [<ffffffff811f5d77>] xfs_remove+0x287/0x370
Nov 20 19:35:16 thoregon kernel: [ 2633.538951]  [<ffffffff816af4ac>] ? mutex_lock_nested+0x26c/0x330
Nov 20 19:35:16 thoregon kernel: [ 2633.538959]  [<ffffffff810ff48f>] ? vfs_unlink+0x4f/0x100
Nov 20 19:35:16 thoregon kernel: [ 2633.538967]  [<ffffffff811ec7f3>] xfs_vn_unlink+0x43/0x90
Nov 20 19:35:16 thoregon kernel: [ 2633.538975]  [<ffffffff810ff4cd>] vfs_unlink+0x8d/0x100
Nov 20 19:35:16 thoregon kernel: [ 2633.538983]  [<ffffffff810ff63a>] do_unlinkat+0xfa/0x1b0
Nov 20 19:35:16 thoregon kernel: [ 2633.538991]  [<ffffffff8108152d>] ? trace_hardirqs_on_caller+0xfd/0x190
Nov 20 19:35:16 thoregon kernel: [ 2633.538999]  [<ffffffff8129ca1e>] ? trace_hardirqs_on_thunk+0x3a/0x3f
Nov 20 19:35:16 thoregon kernel: [ 2633.539007]  [<ffffffff81101bb1>] sys_unlink+0x11/0x20
Nov 20 19:35:16 thoregon kernel: [ 2633.539014]  [<ffffffff816b3252>] system_call_fastpath+0x16/0x1b
Nov 20 19:35:16 thoregon kernel: [ 2633.539020] x86_64-pc-linux D 0000000000000001     0 26897      1 0x00000004
Nov 20 19:35:16 thoregon kernel: [ 2633.539027]  ffff880101959858 0000000000000046 ffff88032934ab80 ffff880101959fd8
Nov 20 19:35:16 thoregon kernel: [ 2633.539036]  ffff880101959fd8 ffff880101959fd8 ffff880162a3e580 ffff88032934ab80
Nov 20 19:35:16 thoregon kernel: [ 2633.539045]  ffff880101959928 ffff8801ca545030 7fffffffffffffff ffff88032934ab80
Nov 20 19:35:16 thoregon kernel: [ 2633.539053] Call Trace:
Nov 20 19:35:16 thoregon kernel: [ 2633.539062]  [<ffffffff816b1224>] schedule+0x24/0x60
Nov 20 19:35:16 thoregon kernel: [ 2633.539070]  [<ffffffff816aeddd>] schedule_timeout+0x17d/0x1c0
Nov 20 19:35:16 thoregon kernel: [ 2633.539077]  [<ffffffff816b246b>] ? _raw_spin_unlock_irq+0x2b/0x50
Nov 20 19:35:16 thoregon kernel: [ 2633.539085]  [<ffffffff8108152d>] ? trace_hardirqs_on_caller+0xfd/0x190
Nov 20 19:35:16 thoregon kernel: [ 2633.539094]  [<ffffffff816b057e>] __down_common+0x97/0xe8
Nov 20 19:35:16 thoregon kernel: [ 2633.539102]  [<ffffffff816b062e>] __down+0x18/0x1a
Nov 20 19:35:16 thoregon kernel: [ 2633.539108]  [<ffffffff8106245b>] down+0x3b/0x50
Nov 20 19:35:16 thoregon kernel: [ 2633.539115]  [<ffffffff811e21b5>] xfs_buf_lock+0x25/0x60
Nov 20 19:35:16 thoregon kernel: [ 2633.539121]  [<ffffffff811e22e2>] _xfs_buf_find+0xf2/0x200
Nov 20 19:35:16 thoregon kernel: [ 2633.539128]  [<ffffffff811e241f>] xfs_buf_get_map+0x2f/0x130
Nov 20 19:35:16 thoregon kernel: [ 2633.539135]  [<ffffffff81210417>] ? xfs_dabuf_map.isra.23+0xb7/0x2f0
Nov 20 19:35:16 thoregon kernel: [ 2633.539142]  [<ffffffff810b572e>] ? get_page_from_freelist+0x1de/0x460
Nov 20 19:35:16 thoregon kernel: [ 2633.539149]  [<ffffffff811e2b23>] xfs_buf_read_map+0x13/0x90
Nov 20 19:35:16 thoregon kernel: [ 2633.539156]  [<ffffffff81233e8d>] xfs_trans_read_buf_map+0x1ad/0x2d0
Nov 20 19:35:16 thoregon kernel: [ 2633.539164]  [<ffffffff812116b6>] xfs_da_read_buf+0xb6/0x180
Nov 20 19:35:16 thoregon kernel: [ 2633.539172]  [<ffffffff8121724c>] xfs_dir2_leaf_lookup_int+0x5c/0x2c0
Nov 20 19:35:16 thoregon kernel: [ 2633.539181]  [<ffffffff81217583>] xfs_dir2_leaf_removename+0x33/0x330
Nov 20 19:35:16 thoregon kernel: [ 2633.539189]  [<ffffffff8120777b>] ? xfs_bmap_last_offset+0x8b/0xc0
Nov 20 19:35:16 thoregon kernel: [ 2633.539197]  [<ffffffff81213b94>] xfs_dir_removename+0x134/0x150
Nov 20 19:35:16 thoregon kernel: [ 2633.539205]  [<ffffffff811f5ce6>] xfs_remove+0x1f6/0x370
Nov 20 19:35:16 thoregon kernel: [ 2633.539214]  [<ffffffff816af4ac>] ? mutex_lock_nested+0x26c/0x330
Nov 20 19:35:16 thoregon kernel: [ 2633.539222]  [<ffffffff810ff48f>] ? vfs_unlink+0x4f/0x100
Nov 20 19:35:16 thoregon kernel: [ 2633.539230]  [<ffffffff811ec7f3>] xfs_vn_unlink+0x43/0x90
Nov 20 19:35:16 thoregon kernel: [ 2633.539238]  [<ffffffff810ff4cd>] vfs_unlink+0x8d/0x100
Nov 20 19:35:16 thoregon kernel: [ 2633.539246]  [<ffffffff810ff63a>] do_unlinkat+0xfa/0x1b0
Nov 20 19:35:16 thoregon kernel: [ 2633.539254]  [<ffffffff8108152d>] ? trace_hardirqs_on_caller+0xfd/0x190
Nov 20 19:35:16 thoregon kernel: [ 2633.539261]  [<ffffffff8129ca1e>] ? trace_hardirqs_on_thunk+0x3a/0x3f
Nov 20 19:35:16 thoregon kernel: [ 2633.539270]  [<ffffffff81101bb1>] sys_unlink+0x11/0x20
Nov 20 19:35:16 thoregon kernel: [ 2633.539277]  [<ffffffff816b3252>] system_call_fastpath+0x16/0x1b
Nov 20 19:35:16 thoregon kernel: [ 2633.539282] x86_64-pc-linux D 0000000000000001     0 26904      1 0x00000004
Nov 20 19:35:16 thoregon kernel: [ 2633.539290]  ffff880066bff858 0000000000000046 ffff880329301d00 ffff880066bfffd8
Nov 20 19:35:16 thoregon kernel: [ 2633.539299]  ffff880066bfffd8 ffff880066bfffd8 ffff88013a7ce580 ffff880329301d00
Nov 20 19:35:16 thoregon kernel: [ 2633.539307]  ffff880066bff928 ffff8801c572c830 7fffffffffffffff ffff880329301d00
Nov 20 19:35:16 thoregon kernel: [ 2633.539316] Call Trace:
Nov 20 19:35:16 thoregon kernel: [ 2633.539325]  [<ffffffff816b1224>] schedule+0x24/0x60
Nov 20 19:35:16 thoregon kernel: [ 2633.539333]  [<ffffffff816aeddd>] schedule_timeout+0x17d/0x1c0
Nov 20 19:35:16 thoregon kernel: [ 2633.539340]  [<ffffffff816b246b>] ? _raw_spin_unlock_irq+0x2b/0x50
Nov 20 19:35:16 thoregon kernel: [ 2633.539348]  [<ffffffff8108152d>] ? trace_hardirqs_on_caller+0xfd/0x190
Nov 20 19:35:16 thoregon kernel: [ 2633.539356]  [<ffffffff816b057e>] __down_common+0x97/0xe8
Nov 20 19:35:16 thoregon kernel: [ 2633.539365]  [<ffffffff816b062e>] __down+0x18/0x1a
Nov 20 19:35:16 thoregon kernel: [ 2633.539371]  [<ffffffff8106245b>] down+0x3b/0x50
Nov 20 19:35:16 thoregon kernel: [ 2633.539377]  [<ffffffff811e21b5>] xfs_buf_lock+0x25/0x60
Nov 20 19:35:16 thoregon kernel: [ 2633.539384]  [<ffffffff811e22e2>] _xfs_buf_find+0xf2/0x200
Nov 20 19:35:16 thoregon kernel: [ 2633.539390]  [<ffffffff811e241f>] xfs_buf_get_map+0x2f/0x130
Nov 20 19:35:16 thoregon kernel: [ 2633.539398]  [<ffffffff81210417>] ? xfs_dabuf_map.isra.23+0xb7/0x2f0
Nov 20 19:35:16 thoregon kernel: [ 2633.539405]  [<ffffffff810b572e>] ? get_page_from_freelist+0x1de/0x460
Nov 20 19:35:16 thoregon kernel: [ 2633.539412]  [<ffffffff811e2b23>] xfs_buf_read_map+0x13/0x90
Nov 20 19:35:16 thoregon kernel: [ 2633.539419]  [<ffffffff81233e8d>] xfs_trans_read_buf_map+0x1ad/0x2d0
Nov 20 19:35:16 thoregon kernel: [ 2633.539426]  [<ffffffff812116b6>] xfs_da_read_buf+0xb6/0x180
Nov 20 19:35:16 thoregon kernel: [ 2633.539435]  [<ffffffff8121724c>] xfs_dir2_leaf_lookup_int+0x5c/0x2c0
Nov 20 19:35:16 thoregon kernel: [ 2633.539444]  [<ffffffff81217583>] xfs_dir2_leaf_removename+0x33/0x330
Nov 20 19:35:16 thoregon kernel: [ 2633.539452]  [<ffffffff8120777b>] ? xfs_bmap_last_offset+0x8b/0xc0
Nov 20 19:35:16 thoregon kernel: [ 2633.539460]  [<ffffffff81213b94>] xfs_dir_removename+0x134/0x150
Nov 20 19:35:16 thoregon kernel: [ 2633.539468]  [<ffffffff811f5ce6>] xfs_remove+0x1f6/0x370
Nov 20 19:35:16 thoregon kernel: [ 2633.539477]  [<ffffffff816af4ac>] ? mutex_lock_nested+0x26c/0x330
Nov 20 19:35:16 thoregon kernel: [ 2633.539484]  [<ffffffff810ff48f>] ? vfs_unlink+0x4f/0x100
Nov 20 19:35:16 thoregon kernel: [ 2633.539493]  [<ffffffff811ec7f3>] xfs_vn_unlink+0x43/0x90
Nov 20 19:35:16 thoregon kernel: [ 2633.539501]  [<ffffffff810ff4cd>] vfs_unlink+0x8d/0x100
Nov 20 19:35:16 thoregon kernel: [ 2633.539509]  [<ffffffff810ff63a>] do_unlinkat+0xfa/0x1b0
Nov 20 19:35:16 thoregon kernel: [ 2633.539517]  [<ffffffff8108152d>] ? trace_hardirqs_on_caller+0xfd/0x190
Nov 20 19:35:16 thoregon kernel: [ 2633.539524]  [<ffffffff8129ca1e>] ? trace_hardirqs_on_thunk+0x3a/0x3f
Nov 20 19:35:16 thoregon kernel: [ 2633.539532]  [<ffffffff81101bb1>] sys_unlink+0x11/0x20
Nov 20 19:35:16 thoregon kernel: [ 2633.539540]  [<ffffffff816b3252>] system_call_fastpath+0x16/0x1b
Nov 20 19:35:16 thoregon kernel: [ 2633.539544] x86_64-pc-linux D 0000000000000000     0 26907      1 0x00000004
Nov 20 19:35:16 thoregon kernel: [ 2633.539552]  ffff8800c0eed858 0000000000000046 ffff8802a7be6580 ffff8800c0eedfd8
Nov 20 19:35:16 thoregon kernel: [ 2633.539560]  ffff8800c0eedfd8 ffff8800c0eedfd8 ffff88032989e580 ffff8802a7be6580
Nov 20 19:35:16 thoregon kernel: [ 2633.539569]  ffff8800c0eed928 ffff8801c55c7630 7fffffffffffffff ffff8802a7be6580
Nov 20 19:35:16 thoregon kernel: [ 2633.539578] Call Trace:
Nov 20 19:35:16 thoregon kernel: [ 2633.539587]  [<ffffffff816b1224>] schedule+0x24/0x60
Nov 20 19:35:16 thoregon kernel: [ 2633.539595]  [<ffffffff816aeddd>] schedule_timeout+0x17d/0x1c0
Nov 20 19:35:16 thoregon kernel: [ 2633.539602]  [<ffffffff816b246b>] ? _raw_spin_unlock_irq+0x2b/0x50
Nov 20 19:35:16 thoregon kernel: [ 2633.539610]  [<ffffffff8108152d>] ? trace_hardirqs_on_caller+0xfd/0x190
Nov 20 19:35:16 thoregon kernel: [ 2633.539618]  [<ffffffff816b057e>] __down_common+0x97/0xe8
Nov 20 19:35:16 thoregon kernel: [ 2633.539627]  [<ffffffff816b062e>] __down+0x18/0x1a
Nov 20 19:35:16 thoregon kernel: [ 2633.539633]  [<ffffffff8106245b>] down+0x3b/0x50
Nov 20 19:35:16 thoregon kernel: [ 2633.539639]  [<ffffffff811e21b5>] xfs_buf_lock+0x25/0x60
Nov 20 19:35:16 thoregon kernel: [ 2633.539646]  [<ffffffff811e22e2>] _xfs_buf_find+0xf2/0x200
Nov 20 19:35:16 thoregon kernel: [ 2633.539652]  [<ffffffff811e241f>] xfs_buf_get_map+0x2f/0x130
Nov 20 19:35:16 thoregon kernel: [ 2633.539660]  [<ffffffff81210417>] ? xfs_dabuf_map.isra.23+0xb7/0x2f0
Nov 20 19:35:16 thoregon kernel: [ 2633.539667]  [<ffffffff810b572e>] ? get_page_from_freelist+0x1de/0x460
Nov 20 19:35:16 thoregon kernel: [ 2633.539674]  [<ffffffff811e2b23>] xfs_buf_read_map+0x13/0x90
Nov 20 19:35:16 thoregon kernel: [ 2633.539681]  [<ffffffff81233e8d>] xfs_trans_read_buf_map+0x1ad/0x2d0
Nov 20 19:35:16 thoregon kernel: [ 2633.539688]  [<ffffffff812116b6>] xfs_da_read_buf+0xb6/0x180
Nov 20 19:35:16 thoregon kernel: [ 2633.539697]  [<ffffffff8121724c>] xfs_dir2_leaf_lookup_int+0x5c/0x2c0
Nov 20 19:35:16 thoregon kernel: [ 2633.539706]  [<ffffffff81217583>] xfs_dir2_leaf_removename+0x33/0x330
Nov 20 19:35:16 thoregon kernel: [ 2633.539714]  [<ffffffff8120777b>] ? xfs_bmap_last_offset+0x8b/0xc0
Nov 20 19:35:16 thoregon kernel: [ 2633.539722]  [<ffffffff81213b94>] xfs_dir_removename+0x134/0x150
Nov 20 19:35:16 thoregon kernel: [ 2633.539730]  [<ffffffff811f5ce6>] xfs_remove+0x1f6/0x370
Nov 20 19:35:16 thoregon kernel: [ 2633.539738]  [<ffffffff816af4ac>] ? mutex_lock_nested+0x26c/0x330
Nov 20 19:35:16 thoregon kernel: [ 2633.539746]  [<ffffffff810ff48f>] ? vfs_unlink+0x4f/0x100
Nov 20 19:35:16 thoregon kernel: [ 2633.539754]  [<ffffffff811ec7f3>] xfs_vn_unlink+0x43/0x90
Nov 20 19:35:16 thoregon kernel: [ 2633.539763]  [<ffffffff810ff4cd>] vfs_unlink+0x8d/0x100
Nov 20 19:35:16 thoregon kernel: [ 2633.539771]  [<ffffffff810ff63a>] do_unlinkat+0xfa/0x1b0
Nov 20 19:35:16 thoregon kernel: [ 2633.539779]  [<ffffffff8108152d>] ? trace_hardirqs_on_caller+0xfd/0x190
Nov 20 19:35:16 thoregon kernel: [ 2633.539786]  [<ffffffff8129ca1e>] ? trace_hardirqs_on_thunk+0x3a/0x3f
Nov 20 19:35:16 thoregon kernel: [ 2633.539794]  [<ffffffff81101bb1>] sys_unlink+0x11/0x20
Nov 20 19:35:16 thoregon kernel: [ 2633.539802]  [<ffffffff816b3252>] system_call_fastpath+0x16/0x1b

[-- Attachment #3: Type: text/plain, Size: 121 bytes --]

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: Hang in XFS reclaim on 3.7.0-rc3
  2012-11-20 19:45                 ` Torsten Kaiser
@ 2012-11-20 20:27                   ` Dave Chinner
  -1 siblings, 0 replies; 31+ messages in thread
From: Dave Chinner @ 2012-11-20 20:27 UTC (permalink / raw)
  To: Torsten Kaiser; +Cc: xfs, Linux Kernel

On Tue, Nov 20, 2012 at 08:45:03PM +0100, Torsten Kaiser wrote:
> On Tue, Nov 20, 2012 at 12:53 AM, Dave Chinner <david@fromorbit.com> wrote:
> >        [<ffffffff8108137e>] mark_held_locks+0x7e/0x130
> >        [<ffffffff81081a63>] lockdep_trace_alloc+0x63/0xc0
> >        [<ffffffff810e9dd5>] kmem_cache_alloc+0x35/0xe0
> >        [<ffffffff810dba31>] vm_map_ram+0x271/0x770
> >        [<ffffffff811e1316>] _xfs_buf_map_pages+0x46/0xe0
> >        [<ffffffff811e222a>] xfs_buf_get_map+0x8a/0x130
> >        [<ffffffff81233ab9>] xfs_trans_get_buf_map+0xa9/0xd0
> >        [<ffffffff8121bced>] xfs_ialloc_inode_init+0xcd/0x1d0
> >
> > We shouldn't be mapping buffers there, there's a patch below to fix
> > this. It's probably the source of this report, even though I cannot
> > be certain; lockdep seems to be off with the fairies...
> 
> That patch seems to break my system.

You've got an IO problem, not an XFS problem. Everything is hung up
on MD.

 INFO: task kswapd0:725 blocked for more than 120 seconds.
 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
 kswapd0         D 0000000000000001     0   725      2 0x00000000
  ffff8803280d13f8 0000000000000046 ffff880329a0ab80 ffff8803280d1fd8
  ffff8803280d1fd8 ffff8803280d1fd8 ffff880046b7c880 ffff880329a0ab80
  ffff8803280d1408 ffff8803278dbbd0 ffff8803278db800 00000000ffffffff
 Call Trace:
  [<ffffffff816b1224>] schedule+0x24/0x60
  [<ffffffff814f9dad>] md_super_wait+0x4d/0x80
  [<ffffffff81500753>] bitmap_unplug+0x173/0x180
  [<ffffffff814e8eb8>] raid1_unplug+0x98/0x110
  [<ffffffff81278a6d>] blk_flush_plug_list+0xad/0x240
  [<ffffffff816b15c3>] io_schedule_timeout+0x83/0xf0
  [<ffffffff810b0e1d>] mempool_alloc+0x12d/0x160
  [<ffffffff811263da>] bvec_alloc_bs+0xda/0x100
  [<ffffffff811264ea>] bio_alloc_bioset+0xea/0x110
  [<ffffffff81126656>] bio_clone_bioset+0x16/0x40
  [<ffffffff814f471a>] bio_clone_mddev+0x1a/0x30
  [<ffffffff814edbb1>] make_request+0x551/0xde0
  [<ffffffff814f80bb>] md_make_request+0x21b/0x4d0
  [<ffffffff81276e52>] generic_make_request+0xc2/0x100
  [<ffffffff81276ef5>] submit_bio+0x65/0x110
  [<ffffffff811e07bf>] xfs_submit_ioend_bio.isra.21+0x2f/0x40
  [<ffffffff811e088e>] xfs_submit_ioend+0xbe/0x110
  [<ffffffff811e0c91>] xfs_vm_writepage+0x3b1/0x540
  [<ffffffff810bcd84>] shrink_page_list+0x564/0x890
  [<ffffffff810bd637>] shrink_inactive_list+0x1d7/0x310
  [<ffffffff810bdb9d>] shrink_lruvec+0x42d/0x530
  [<ffffffff810be323>] kswapd+0x683/0xa20
  [<ffffffff8105c246>] kthread+0xd6/0xe0
  [<ffffffff816b31ac>] ret_from_fork+0x7c/0xb0
 no locks held by kswapd0/725.

So kswapd is trying to clean pages, but it's blocked in an unplug
during IO submission. Probably one to report to the linux-raid list.
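(For context, the "unplug" in that trace comes from the block layer's plugging API. A rough sketch of the pattern, in kernel-style C against the 3.7-era API, not compilable standalone; the batching function here is made up for illustration:)

```c
#include <linux/blkdev.h>	/* blk_start_plug(), blk_finish_plug() */
#include <linux/bio.h>		/* submit_bio() */

/*
 * Illustrative only: while a plug is active, submitted bios are
 * queued on the task's plug list instead of being dispatched
 * immediately.  When the plug is flushed (explicitly via
 * blk_finish_plug(), or implicitly when the task schedules),
 * blk_flush_plug_list() invokes each driver's unplug callback.
 * For RAID1 that callback is raid1_unplug(), which can end up in
 * md_super_wait(), and that is where kswapd is blocked above.
 */
static void submit_batch(struct bio **bios, int nr)
{
	struct blk_plug plug;
	int i;

	blk_start_plug(&plug);		/* start batching bios */
	for (i = 0; i < nr; i++)
		submit_bio(WRITE, bios[i]);	/* queued on the plug list */
	blk_finish_plug(&plug);		/* flush: unplug callbacks run here */
}
```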

 INFO: task xfsaild/md4:1742 blocked for more than 120 seconds.
 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
 xfsaild/md4     D 0000000000000003     0  1742      2 0x00000000
  ffff88032438bb68 0000000000000046 ffff880329965700 ffff88032438bfd8
  ffff88032438bfd8 ffff88032438bfd8 ffff88032827e580 ffff880329965700
  ffff88032438bb78 ffff8803278dbbd0 ffff8803278db800 00000000ffffffff
 Call Trace:
  [<ffffffff816b1224>] schedule+0x24/0x60
  [<ffffffff814f9dad>] md_super_wait+0x4d/0x80
  [<ffffffff8105ca30>] ? __init_waitqueue_head+0x60/0x60
  [<ffffffff81500753>] bitmap_unplug+0x173/0x180
  [<ffffffff81278c13>] ? blk_finish_plug+0x13/0x50
  [<ffffffff814e8eb8>] raid1_unplug+0x98/0x110
  [<ffffffff81278a6d>] blk_flush_plug_list+0xad/0x240
  [<ffffffff81278c13>] blk_finish_plug+0x13/0x50
  [<ffffffff811e296a>] __xfs_buf_delwri_submit+0x1ca/0x1e0
  [<ffffffff811e2ffb>] xfs_buf_delwri_submit_nowait+0x1b/0x20
  [<ffffffff81233066>] xfsaild+0x226/0x4c0
  [<ffffffff81065dfa>] ? finish_task_switch+0x3a/0x100
  [<ffffffff81232e40>] ? xfs_trans_ail_cursor_first+0xa0/0xa0
  [<ffffffff8105c246>] kthread+0xd6/0xe0
  [<ffffffff816b246b>] ? _raw_spin_unlock_irq+0x2b/0x50
  [<ffffffff8105c170>] ? flush_kthread_worker+0xe0/0xe0
  [<ffffffff816b31ac>] ret_from_fork+0x7c/0xb0
  [<ffffffff8105c170>] ? flush_kthread_worker+0xe0/0xe0
 no locks held by xfsaild/md4/1742.

Same here - metadata writes are backed up waiting for MD to submit
IO. Everything else is stuck on these or MD, too...

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

^ permalink raw reply	[flat|nested] 31+ messages in thread


end of thread, other threads:[~2012-11-20 20:27 UTC | newest]

Thread overview: 31+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2012-10-29 20:03 Hang in XFS reclaim on 3.7.0-rc3 Torsten Kaiser
2012-10-29 20:03 ` Torsten Kaiser
2012-10-29 22:26 ` Dave Chinner
2012-10-29 22:26   ` Dave Chinner
2012-10-29 22:41   ` Dave Chinner
2012-10-29 22:41     ` Dave Chinner
2012-10-29 22:41     ` Dave Chinner
2012-10-30 20:37   ` Torsten Kaiser
2012-10-30 20:37     ` Torsten Kaiser
2012-10-30 20:46     ` Christoph Hellwig
2012-10-30 20:46       ` Christoph Hellwig
2012-11-18 10:24     ` Torsten Kaiser
2012-11-18 10:24       ` Torsten Kaiser
2012-11-18 15:29       ` Torsten Kaiser
2012-11-18 15:29         ` Torsten Kaiser
2012-11-18 23:51         ` Dave Chinner
2012-11-18 23:51           ` Dave Chinner
2012-11-19  6:50           ` Torsten Kaiser
2012-11-19  6:50             ` Torsten Kaiser
2012-11-19 23:53             ` Dave Chinner
2012-11-19 23:53               ` Dave Chinner
2012-11-20  7:09               ` Torsten Kaiser
2012-11-20  7:09                 ` Torsten Kaiser
2012-11-20 19:45               ` Torsten Kaiser
2012-11-20 19:45                 ` Torsten Kaiser
2012-11-20 20:27                 ` Dave Chinner
2012-11-20 20:27                   ` Dave Chinner
2012-11-01 21:30   ` Ben Myers
2012-11-01 21:30     ` Ben Myers
2012-11-01 22:32     ` Dave Chinner
2012-11-01 22:32       ` Dave Chinner
