linux-kernel.vger.kernel.org archive mirror
* linux-next: JFFS2 deadlock
From: Mark Jackson @ 2013-02-26 11:54 UTC (permalink / raw)
  To: linux-next; +Cc: linux-mtd, David Woodhouse, lkml

Just tested the current next-20130226 on a custom AM335X board, and I received the JFFS2 deadlock shown below.

Regards
Mark JACKSON
---
[    3.250349] jffs2: notice: (1) jffs2_build_xattr_subsystem: complete building xattr subsystem, 0 of xdatum (0 unchecked, 0 orphan) and 0 of xref (0 dead, 0 orphan) found.
[    3.268364] VFS: Mounted root (jffs2 filesystem) readonly on device 31:6.
[    3.277233] devtmpfs: mounted
[    3.280982] Freeing init memory: 332K
[    3.706697]
[    3.708306] ======================================================
[    3.714804] [ INFO: possible circular locking dependency detected ]
[    3.721398] 3.8.0-next-20130226-dirty #10 Not tainted
[    3.726708] -------------------------------------------------------
[    3.733297] rcS/686 is trying to acquire lock:
[    3.737969]  (&mm->mmap_sem){++++++}, at: [<c00f0af4>] might_fault+0x3c/0x90
[    3.745437]
[    3.745437] but task is already holding lock:
[    3.751569]  (&f->sem){+.+.+.}, at: [<c023d128>] jffs2_readdir+0x44/0x1a8
[    3.758748]
[    3.758748] which lock already depends on the new lock.
[    3.758748]
[    3.767348]
[    3.767348] the existing dependency chain (in reverse order) is:
[    3.775215]
-> #1 (&f->sem){+.+.+.}:
[    3.779184]        [<c0092df0>] lock_acquire+0x9c/0x104
[    3.784701]        [<c04b76e4>] mutex_lock_nested+0x3c/0x334
[    3.790666]        [<c023d950>] jffs2_readpage+0x20/0x44
[    3.796261]        [<c00d9d38>] __do_page_cache_readahead+0x2a0/0x2cc
[    3.803050]        [<c00da004>] ra_submit+0x28/0x30
[    3.808187]        [<c00d179c>] filemap_fault+0x304/0x458
[    3.813884]        [<c00f0c58>] __do_fault+0x6c/0x490
[    3.819203]        [<c00f3c5c>] handle_pte_fault+0xb0/0x6f0
[    3.825071]        [<c00f433c>] handle_mm_fault+0xa0/0xd4
[    3.830755]        [<c04bbdcc>] do_page_fault+0x2a0/0x3d4
[    3.836449]        [<c000845c>] do_DataAbort+0x30/0x9c
[    3.841861]        [<c04ba2a4>] __dabt_svc+0x44/0x80
[    3.847089]        [<c0289c34>] __clear_user_std+0x1c/0x64
[    3.852877]
-> #0 (&mm->mmap_sem){++++++}:
[    3.857393]        [<c00927ec>] __lock_acquire+0x1d70/0x1de0
[    3.863353]        [<c0092df0>] lock_acquire+0x9c/0x104
[    3.868855]        [<c00f0b18>] might_fault+0x60/0x90
[    3.874174]        [<c011bc3c>] filldir+0x5c/0x158
[    3.879230]        [<c023d1c0>] jffs2_readdir+0xdc/0x1a8
[    3.884823]        [<c011becc>] vfs_readdir+0x98/0xb4
[    3.890144]        [<c011bfcc>] sys_getdents+0x74/0xd0
[    3.895554]        [<c0013820>] ret_fast_syscall+0x0/0x3c
[    3.901251]
[    3.901251] other info that might help us debug this:
[    3.901251]
[    3.909668]  Possible unsafe locking scenario:
[    3.909668]
[    3.915892]        CPU0                    CPU1
[    3.920652]        ----                    ----
[    3.925411]   lock(&f->sem);
[    3.928451]                                lock(&mm->mmap_sem);
[    3.934688]                                lock(&f->sem);
[    3.940376]   lock(&mm->mmap_sem);
[    3.943965]
[    3.943965]  *** DEADLOCK ***
[    3.943965]
[    3.950196] 2 locks held by rcS/686:
[    3.953952]  #0:  (&type->i_mutex_dir_key){+.+.+.}, at: [<c011be90>] vfs_readdir+0x5c/0xb4
[    3.962686]  #1:  (&f->sem){+.+.+.}, at: [<c023d128>] jffs2_readdir+0x44/0x1a8
[    3.970320]
[    3.970320] stack backtrace:
[    3.974930] [<c001b158>] (unwind_backtrace+0x0/0xf0) from [<c008f29c>] (print_circular_bug+0x1d0/0x2dc)
[    3.984815] [<c008f29c>] (print_circular_bug+0x1d0/0x2dc) from [<c00927ec>] (__lock_acquire+0x1d70/0x1de0)
[    3.994975] [<c00927ec>] (__lock_acquire+0x1d70/0x1de0) from [<c0092df0>] (lock_acquire+0x9c/0x104)
[    4.004494] [<c0092df0>] (lock_acquire+0x9c/0x104) from [<c00f0b18>] (might_fault+0x60/0x90)
[    4.013376] [<c00f0b18>] (might_fault+0x60/0x90) from [<c011bc3c>] (filldir+0x5c/0x158)
[    4.021802] [<c011bc3c>] (filldir+0x5c/0x158) from [<c023d1c0>] (jffs2_readdir+0xdc/0x1a8)
[    4.030502] [<c023d1c0>] (jffs2_readdir+0xdc/0x1a8) from [<c011becc>] (vfs_readdir+0x98/0xb4)
[    4.039477] [<c011becc>] (vfs_readdir+0x98/0xb4) from [<c011bfcc>] (sys_getdents+0x74/0xd0)
[    4.048270] [<c011bfcc>] (sys_getdents+0x74/0xd0) from [<c0013820>] (ret_fast_syscall+0x0/0x3c)


* Re: linux-next: JFFS2 deadlock
From: Stephen Rothwell @ 2013-02-26 23:17 UTC (permalink / raw)
  To: Mark Jackson; +Cc: linux-next, linux-mtd, David Woodhouse, lkml, Al Viro


Hi Mark,

On Tue, 26 Feb 2013 11:54:56 +0000 Mark Jackson <mpfj-list@mimc.co.uk> wrote:
>
> Just tested the current next-20130226 on a custom AM335X board, and I received the JFFS2 deadlock shown below.

Is this new today?  Is it reproducible?  Does it fail on Linus' tree?

Al, could this be something to do with the new stuff in the vfs tree?

> [... rest of the original report and lockdep trace snipped; quoted in full above ...]

-- 
Cheers,
Stephen Rothwell                    sfr@canb.auug.org.au



* Re: linux-next: JFFS2 deadlock
From: Al Viro @ 2013-02-27  0:27 UTC (permalink / raw)
  To: Stephen Rothwell
  Cc: Mark Jackson, linux-next, linux-mtd, David Woodhouse, lkml

On Wed, Feb 27, 2013 at 10:17:04AM +1100, Stephen Rothwell wrote:
> Hi Mark,
> 
> On Tue, 26 Feb 2013 11:54:56 +0000 Mark Jackson <mpfj-list@mimc.co.uk> wrote:
> >
> > Just tested the current next-20130226 on a custom AM335X board, and I received the JFFS2 deadlock shown below.
> 
> Is this new today?  Is it reproducible?  Does it fail on Linus' tree?
> 
> Al, could this be something to do with the new stuff in the vfs tree?

Very unlikely.  jffs2_readdir() does, indeed, grab ->sem on the inode in
question and then calls the callback (which might fault, grabbing ->mmap_sem).
It has been doing that all along.  And if the userland area we are doing
getdents(2) into had been mmapped from a jffs2 file, jffs2_readpage() would
be called, which would grab ->sem on the inode of the file mmapped there.
Again, that has been going on all along.  Unlike the situation with ->i_mutex,
this one is probably a false positive - ->sem on directories nests outside of
->mmap_sem, while ->sem on non-directories nests inside it.  But that false
positive shouldn't be anything new; hell, both paths are present in 2.6.12
with the same lock ordering as in the current tree (except that ->sem used to
be a semaphore instead of a mutex).
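
To make the two orderings concrete, here is a heavily simplified sketch of
the two paths involved (this is not the actual jffs2 source, just the locking
pattern the splat above refers to; the function names, bodies and header
comment are illustrative only):

/* Sketch, written as if it lived in fs/jffs2/ and used its private headers. */
#include <linux/fs.h>
#include <linux/mutex.h>
#include <linux/pagemap.h>
#include <linux/string.h>
#include "nodelist.h"	/* struct jffs2_inode_info, struct jffs2_full_dirent,
			 * JFFS2_INODE_INFO() */

/* Path #0: getdents(2) on a jffs2 directory */
static int readdir_sketch(struct file *filp, void *dirent, filldir_t filldir)
{
	struct inode *inode = filp->f_path.dentry->d_inode;
	struct jffs2_inode_info *f = JFFS2_INODE_INFO(inode);
	struct jffs2_full_dirent *fd;

	mutex_lock(&f->sem);		/* directory ->sem taken first ...     */
	for (fd = f->dents; fd; fd = fd->next) {
		/* filldir() copies the entry out to userspace; if that page
		 * is not present it faults, i.e. takes ->mmap_sem while the
		 * directory's ->sem is still held                            */
		if (filldir(dirent, (const char *)fd->name,
			    strlen((const char *)fd->name),
			    filp->f_pos, fd->ino, fd->type) < 0)
			break;
		filp->f_pos++;
	}
	mutex_unlock(&f->sem);
	return 0;
}

/* Path #1: page fault on a file that was mmapped from jffs2 */
static int readpage_sketch(struct file *filp, struct page *pg)
{
	struct jffs2_inode_info *f = JFFS2_INODE_INFO(pg->mapping->host);
	int ret = 0;

	/* we get here from filemap_fault(), i.e. already under ->mmap_sem;
	 * the (regular file's) ->sem is therefore taken inside ->mmap_sem    */
	mutex_lock(&f->sem);
	/* ... read and decompress the node(s) backing this page ... */
	mutex_unlock(&f->sem);
	return ret;
}

Lockdep has a single class for &f->sem, so the directory's "->sem then
->mmap_sem" order and the mmapped file's "->mmap_sem then ->sem" order look
like a cycle, even though the two ->sem instances belong to different inodes.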


* Re: linux-next: JFFS2 deadlock
From: Mark Jackson @ 2013-02-27 15:19 UTC (permalink / raw)
  To: Stephen Rothwell; +Cc: linux-next, linux-mtd, David Woodhouse, lkml, Al Viro

On 26/02/13 23:17, Stephen Rothwell wrote:
> Hi Mark,
> 
> On Tue, 26 Feb 2013 11:54:56 +0000 Mark Jackson <mpfj-list@mimc.co.uk> wrote:
>>
>> Just tested the current next-20130226 on a custom AM335X board, and I received the JFFS2 deadlock shown below.
> 
> Is this new today?  Is it reproducible?  Does it fail on Linus' tree?

Almost identical on Linus' tree:-

[    3.685186]
[    3.686803] ======================================================
[    3.693333] [ INFO: possible circular locking dependency detected ]
[    3.699962] 3.8.0-09426-g986876f-dirty #133 Not tainted
[    3.705482] -------------------------------------------------------
[    3.712105] rcS/682 is trying to acquire lock:
[    3.716801]  (&mm->mmap_sem){++++++}, at: [<c00f1258>] might_fault+0x3c/0x90
[    3.724307]
[    3.724307] but task is already holding lock:
[    3.730470]  (&f->sem){+.+.+.}, at: [<c023cb30>] jffs2_readdir+0x44/0x1a8
[    3.737686]
[    3.737686] which lock already depends on the new lock.
[    3.737686]
[    3.746331]
[    3.746331] the existing dependency chain (in reverse order) is:
[    3.754239]
-> #1 (&f->sem){+.+.+.}:
[    3.758229]        [<c0093614>] lock_acquire+0x9c/0x104
[    3.763775]        [<c04b3b4c>] mutex_lock_nested+0x3c/0x334
[    3.769769]        [<c023d358>] jffs2_readpage+0x20/0x44
[    3.775392]        [<c00da510>] __do_page_cache_readahead+0x2a0/0x2cc
[    3.782217]        [<c00da7dc>] ra_submit+0x28/0x30
[    3.787380]        [<c00d2010>] filemap_fault+0x304/0x458
[    3.793094]        [<c00f13bc>] __do_fault+0x6c/0x490
[    3.798439]        [<c00f43a4>] handle_pte_fault+0xb0/0x6f0
[    3.804337]        [<c00f4a84>] handle_mm_fault+0xa0/0xd4
[    3.810050]        [<c04b80e8>] do_page_fault+0x2a0/0x3d4
[    3.815773]        [<c000845c>] do_DataAbort+0x30/0x9c
[    3.821212]        [<c04b65e4>] __dabt_svc+0x44/0x80
[    3.826467]        [<c0288d10>] __clear_user_std+0x1c/0x64
[    3.832285]
-> #0 (&mm->mmap_sem){++++++}:
[    3.836824]        [<c0093010>] __lock_acquire+0x1d70/0x1de0
[    3.842815]        [<c0093614>] lock_acquire+0x9c/0x104
[    3.848345]        [<c00f127c>] might_fault+0x60/0x90
[    3.853690]        [<c011c504>] filldir+0x5c/0x158
[    3.858762]        [<c023cbc8>] jffs2_readdir+0xdc/0x1a8
[    3.864384]        [<c011c794>] vfs_readdir+0x98/0xb4
[    3.869729]        [<c011c894>] sys_getdents+0x74/0xd0
[    3.875166]        [<c0013800>] ret_fast_syscall+0x0/0x3c
[    3.880890]
[    3.880890] other info that might help us debug this:
[    3.880890]
[    3.889350]  Possible unsafe locking scenario:
[    3.889350]
[    3.895604]        CPU0                    CPU1
[    3.900388]        ----                    ----
[    3.905171]   lock(&f->sem);
[    3.908227]                                lock(&mm->mmap_sem);
[    3.914494]                                lock(&f->sem);
[    3.920210]   lock(&mm->mmap_sem);
[    3.923816]
[    3.923816]  *** DEADLOCK ***
[    3.923816]
[    3.930077] 2 locks held by rcS/682:
[    3.933851]  #0:  (&type->i_mutex_dir_key){+.+.+.}, at: [<c011c758>] vfs_readdir+0x5c/0xb4
[    3.942625]  #1:  (&f->sem){+.+.+.}, at: [<c023cb30>] jffs2_readdir+0x44/0x1a8
[    3.950298]
[    3.950298] stack backtrace:
[    3.954930] [<c001b11c>] (unwind_backtrace+0x0/0xf0) from [<c008fac0>] (print_circular_bug+0x1d0/0x2dc)
[    3.964870] [<c008fac0>] (print_circular_bug+0x1d0/0x2dc) from [<c0093010>] (__lock_acquire+0x1d70/0x1de0)
[    3.975081] [<c0093010>] (__lock_acquire+0x1d70/0x1de0) from [<c0093614>] (lock_acquire+0x9c/0x104)
[    3.984650] [<c0093614>] (lock_acquire+0x9c/0x104) from [<c00f127c>] (might_fault+0x60/0x90)
[    3.993573] [<c00f127c>] (might_fault+0x60/0x90) from [<c011c504>] (filldir+0x5c/0x158)
[    4.002039] [<c011c504>] (filldir+0x5c/0x158) from [<c023cbc8>] (jffs2_readdir+0xdc/0x1a8)
[    4.010781] [<c023cbc8>] (jffs2_readdir+0xdc/0x1a8) from [<c011c794>] (vfs_readdir+0x98/0xb4)
[    4.019797] [<c011c794>] (vfs_readdir+0x98/0xb4) from [<c011c894>] (sys_getdents+0x74/0xd0)
[    4.028629] [<c011c894>] (sys_getdents+0x74/0xd0) from [<c0013800>] (ret_fast_syscall+0x0/0x3c)


