* 3.15.0-rc2: RECLAIM_FS-safe -> RECLAIM_FS-unsafe lock order detected
@ 2014-04-25 10:21 Christian Kujau
  2014-04-28  0:50 ` Dave Chinner
  0 siblings, 1 reply; 2+ messages in thread
From: Christian Kujau @ 2014-04-25 10:21 UTC (permalink / raw)
  To: xfs

Hi,

I haven't run vanilla for a while, so this is pretty much a copy of
what I reported[0] back with 3.14-rc2, but now with 3.15-rc2. Full
dmesg & .config can be found here:

   http://nerdbynature.de/bits/3.15-rc2/


======================================================
[ INFO: RECLAIM_FS-safe -> RECLAIM_FS-unsafe lock order detected ]
3.15.0-rc2 #1 Not tainted
------------------------------------------------------
rm/8288 [HC0[0]:SC0[0]:HE1:SE1] is trying to acquire:
 (&mm->mmap_sem){++++++}, at: [<c00b16ac>] might_fault+0x58/0xa0

and this task is already holding:
 (&xfs_dir_ilock_class){++++-.}, at: [<c020f790>] 
xfs_ilock_data_map_shared+0x28/0x70
which would create a new lock dependency:
 (&xfs_dir_ilock_class){++++-.} -> (&mm->mmap_sem){++++++}

but this new dependency connects a RECLAIM_FS-irq-safe lock:
 (&xfs_dir_ilock_class){++++-.}
... which became RECLAIM_FS-irq-safe at:
  [<c00658a4>] lock_acquire+0x54/0x70
  [<c00600f0>] down_write_nested+0x50/0xa0
  [<c01cef9c>] xfs_reclaim_inode+0x108/0x318
  [<c01cf360>] xfs_reclaim_inodes_ag+0x1b4/0x360
  [<c01cfea4>] xfs_reclaim_inodes_nr+0x38/0x4c
  [<c00d2d00>] super_cache_scan+0x150/0x158
  [<c00a2110>] shrink_slab_node+0x138/0x228
  [<c00a2874>] shrink_slab+0x124/0x13c
  [<c00a53f4>] kswapd+0x3f8/0x884
  [<c004e654>] kthread+0xbc/0xd0
  [<c0010b7c>] ret_from_kernel_thread+0x5c/0x64
to a RECLAIM_FS-irq-unsafe lock:
 (&mm->mmap_sem){++++++}
... which became RECLAIM_FS-irq-unsafe at:
...  [<c0065f94>] lockdep_trace_alloc+0x84/0x104
  [<c00cb630>] kmem_cache_alloc+0x30/0x148
  [<c00ba038>] mmap_region+0x2fc/0x578
  [<c00ba5a0>] do_mmap_pgoff+0x2ec/0x378
  [<c00aacf8>] vm_mmap_pgoff+0x58/0x94
  [<c012124c>] load_elf_binary+0x488/0x11f4
  [<c00d5b48>] search_binary_handler+0x98/0x1f4
  [<c00d6abc>] do_execve+0x484/0x580
  [<c000425c>] try_to_run_init_process+0x18/0x58
  [<c0004a5c>] kernel_init+0xac/0x110
  [<c0010b7c>] ret_from_kernel_thread+0x5c/0x64

other info that might help us debug this:

 Possible interrupt unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&mm->mmap_sem);
                               local_irq_disable();
                               lock(&xfs_dir_ilock_class);
                               lock(&mm->mmap_sem);
  <Interrupt>
    lock(&xfs_dir_ilock_class);

 *** DEADLOCK ***
2 locks held by rm/8288:
 #0:  (&type->i_mutex_dir_key#5){+.+.+.}, at: [<c00e230c>] iterate_dir+0x3c/0xd0
 #1:  (&xfs_dir_ilock_class){++++-.}, at: [<c020f790>] xfs_ilock_data_map_shared+0x28/0x70

[....]


Christian.

[0] http://oss.sgi.com/archives/xfs/2014-02/msg00362.html
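For readers unfamiliar with this class of lockdep splat: the report above is about a *dependency cycle through reclaim*, not a deadlock that actually happened. The idea can be sketched as a toy dependency-graph check (illustrative Python only, not kernel code; the lock names are simplified stand-ins for the classes in the report):

```python
# Toy model of the dependency cycle lockdep is reporting.
# An edge A -> B means "B was acquired while A was held".
from collections import defaultdict

edges = defaultdict(set)

def record_acquire(held, acquired):
    """Record that `acquired` was taken while `held` was held."""
    edges[held].add(acquired)

def path_exists(src, dst, seen=None):
    """Depth-first search over recorded edges."""
    seen = set() if seen is None else seen
    for n in edges[src]:
        if n == dst:
            return True
        if n not in seen:
            seen.add(n)
            if path_exists(n, dst, seen):
                return True
    return False

# rm/8288: readdir holds the directory ilock, then might_fault() on the
# user buffer implies taking mmap_sem.
record_acquire("xfs_dir_ilock_class", "mmap_sem")

# Separately: mmap_sem is held over an allocation that may enter
# reclaim, and the reclaim path takes an inode ilock -- the reverse
# order, closing the loop through the RECLAIM_FS pseudo-context.
record_acquire("mmap_sem", "RECLAIM_FS")
record_acquire("RECLAIM_FS", "xfs_dir_ilock_class")

# A path from a lock back to itself is exactly what the splat reports.
print(path_exists("xfs_dir_ilock_class", "xfs_dir_ilock_class"))  # -> True
```

Real lockdep tracks this per lock *class* with usage bits rather than an explicit graph walk per acquire, but the cycle it objects to is the one modeled here.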
-- 
BOFH excuse #354:

Chewing gum on /dev/sd3c

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs


* Re: 3.15.0-rc2: RECLAIM_FS-safe -> RECLAIM_FS-unsafe lock order detected
  2014-04-25 10:21 3.15.0-rc2: RECLAIM_FS-safe -> RECLAIM_FS-unsafe lock order detected Christian Kujau
@ 2014-04-28  0:50 ` Dave Chinner
  0 siblings, 0 replies; 2+ messages in thread
From: Dave Chinner @ 2014-04-28  0:50 UTC (permalink / raw)
  To: Christian Kujau; +Cc: xfs

On Fri, Apr 25, 2014 at 03:21:16AM -0700, Christian Kujau wrote:
> [...]
>
>  Possible interrupt unsafe locking scenario:
> 
>        CPU0                    CPU1
>        ----                    ----
>   lock(&mm->mmap_sem);
>                                local_irq_disable();
>                                lock(&xfs_dir_ilock_class);
>                                lock(&mm->mmap_sem);
>   <Interrupt>
>     lock(&xfs_dir_ilock_class);

Known false positive. Directory inodes can't be mmap()d or execv()d,
nor can referenced inodes be reclaimed.
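Dave's point is that lockdep's single `xfs_dir_ilock_class` view is too coarse: the `mmap_sem -> reclaim -> ilock` edge can only ever involve an *unreferenced regular-file* inode, while the `dir ilock -> mmap_sem` edge involves a directory inode the task holds a reference to, so no real task can traverse the whole cycle. A toy sketch of that argument (illustrative Python, not kernel code; `dir_ilock`/`file_ilock` are hypothetical names used to distinguish the cases):

```python
# Toy illustration of why the report is a false positive: once the
# two edges are tagged with the kind of inode actually involved,
# the cycle disappears.
from collections import defaultdict

edges = defaultdict(set)

def record_acquire(held, acquired):
    edges[held].add(acquired)

def path_exists(src, dst, seen=None):
    seen = set() if seen is None else seen
    for n in edges[src]:
        if n == dst:
            return True
        if n not in seen:
            seen.add(n)
            if path_exists(n, dst, seen):
                return True
    return False

# readdir: directory ilock -> mmap_sem (copying entries to userspace).
record_acquire("dir_ilock", "mmap_sem")

# mmap()/execve(): mmap_sem -> reclaim. But only regular files can be
# mapped or executed, and reclaim only touches unreferenced inodes,
# so the reclaim side can only ever take a *different* inode's ilock.
record_acquire("mmap_sem", "reclaim")
record_acquire("reclaim", "file_ilock")

# No path leads back to the directory ilock: lockdep's merged class
# reported a cycle that cannot occur in practice.
print(path_exists("dir_ilock", "dir_ilock"))  # -> False
```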

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


