* xfs i_lock vs mmap_sem lockdep trace.
@ 2014-03-29 22:31 ` Dave Jones
From: Dave Jones @ 2014-03-29 22:31 UTC (permalink / raw)
  To: Linux Kernel; +Cc: xfs

Not sure if I've reported this already (it looks familiar, though I've not managed
to find it in my sent mail folder).  This is rc8 + a diff to fix the stack usage reports
I was seeing (diff at http://paste.fedoraproject.org/89854/13210913/raw)

 ======================================================
 [ INFO: possible circular locking dependency detected ]
 3.14.0-rc8+ #153 Not tainted
 -------------------------------------------------------
 git/32710 is trying to acquire lock:
  (&(&ip->i_lock)->mr_lock){++++.+}, at: [<ffffffffc03bd782>] xfs_ilock+0x122/0x250 [xfs]
 
but task is already holding lock:
  (&mm->mmap_sem){++++++}, at: [<ffffffffae7b816a>] __do_page_fault+0x14a/0x610

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #1 (&mm->mmap_sem){++++++}:
        [<ffffffffae0d1951>] lock_acquire+0x91/0x1c0
        [<ffffffffae1a66dc>] might_fault+0x8c/0xb0
        [<ffffffffae2016e1>] filldir+0x91/0x120
        [<ffffffffc0359622>] xfs_dir2_leaf_getdents+0x332/0x450 [xfs]
        [<ffffffffc035993e>] xfs_readdir+0x1fe/0x260 [xfs]
        [<ffffffffc035c2ab>] xfs_file_readdir+0x2b/0x40 [xfs]
        [<ffffffffae201528>] iterate_dir+0xa8/0xe0
        [<ffffffffae2019ea>] SyS_getdents+0x9a/0x130
        [<ffffffffae7bda64>] tracesys+0xdd/0xe2
 
-> #0 (&(&ip->i_lock)->mr_lock){++++.+}:
        [<ffffffffae0d0dee>] __lock_acquire+0x181e/0x1bd0
        [<ffffffffae0d1951>] lock_acquire+0x91/0x1c0
        [<ffffffffae0ca852>] down_read_nested+0x52/0xa0
        [<ffffffffc03bd782>] xfs_ilock+0x122/0x250 [xfs]
        [<ffffffffc03bd8cf>] xfs_ilock_data_map_shared+0x1f/0x40 [xfs]
        [<ffffffffc034dca7>] __xfs_get_blocks+0xc7/0x840 [xfs]
        [<ffffffffc034e431>] xfs_get_blocks+0x11/0x20 [xfs]
        [<ffffffffae2347a8>] do_mpage_readpage+0x4a8/0x6f0
        [<ffffffffae234adb>] mpage_readpages+0xeb/0x160
        [<ffffffffc034b62d>] xfs_vm_readpages+0x1d/0x20 [xfs]
        [<ffffffffae188a6a>] __do_page_cache_readahead+0x2ea/0x390
        [<ffffffffae1891e1>] ra_submit+0x21/0x30
        [<ffffffffae17c085>] filemap_fault+0x395/0x420
        [<ffffffffae1a684f>] __do_fault+0x7f/0x570
        [<ffffffffae1aa6e7>] handle_mm_fault+0x217/0xc40
        [<ffffffffae7b81ce>] __do_page_fault+0x1ae/0x610
        [<ffffffffae7b864e>] do_page_fault+0x1e/0x70
        [<ffffffffae7b4fd2>] page_fault+0x22/0x30
 
other info that might help us debug this:

  Possible unsafe locking scenario:

        CPU0                    CPU1
        ----                    ----
   lock(&mm->mmap_sem);
                                lock(&(&ip->i_lock)->mr_lock);
                                lock(&mm->mmap_sem);
   lock(&(&ip->i_lock)->mr_lock);
 
 *** DEADLOCK ***

1 lock held by git/32710:
 #0:  (&mm->mmap_sem){++++++}, at: [<ffffffffae7b816a>] __do_page_fault+0x14a/0x610

stack backtrace:
CPU: 1 PID: 32710 Comm: git Not tainted 3.14.0-rc8+ #153
 ffffffffaf69e650 000000005bc802c5 ffff88006bc9f768 ffffffffae7a8da2
 ffffffffaf69e650 ffff88006bc9f7a8 ffffffffae7a4e66 ffff88006bc9f800
 ffff880069c3dc30 0000000000000000 ffff880069c3dbf8 ffff880069c3dc30
Call Trace:
 [<ffffffffae7a8da2>] dump_stack+0x4e/0x7a
 [<ffffffffae7a4e66>] print_circular_bug+0x201/0x20f
 [<ffffffffae0d0dee>] __lock_acquire+0x181e/0x1bd0
 [<ffffffffae0d1951>] lock_acquire+0x91/0x1c0
 [<ffffffffc03bd782>] ? xfs_ilock+0x122/0x250 [xfs]
 [<ffffffffc03bd8cf>] ? xfs_ilock_data_map_shared+0x1f/0x40 [xfs]
 [<ffffffffae0ca852>] down_read_nested+0x52/0xa0
 [<ffffffffc03bd782>] ? xfs_ilock+0x122/0x250 [xfs]
 [<ffffffffc03bd782>] xfs_ilock+0x122/0x250 [xfs]
 [<ffffffffc03bd8cf>] xfs_ilock_data_map_shared+0x1f/0x40 [xfs]
 [<ffffffffc034dca7>] __xfs_get_blocks+0xc7/0x840 [xfs]
 [<ffffffffae18481c>] ? __alloc_pages_nodemask+0x1ac/0xbb0
 [<ffffffffc034e431>] xfs_get_blocks+0x11/0x20 [xfs]
 [<ffffffffae2347a8>] do_mpage_readpage+0x4a8/0x6f0
 [<ffffffffc034e420>] ? __xfs_get_blocks+0x840/0x840 [xfs]
 [<ffffffffae0ae61d>] ? get_parent_ip+0xd/0x50
 [<ffffffffae7b8e0b>] ? preempt_count_sub+0x6b/0xf0
 [<ffffffffae18acc5>] ? __lru_cache_add+0x65/0xc0
 [<ffffffffae234adb>] mpage_readpages+0xeb/0x160
 [<ffffffffc034e420>] ? __xfs_get_blocks+0x840/0x840 [xfs]
 [<ffffffffc034e420>] ? __xfs_get_blocks+0x840/0x840 [xfs]
 [<ffffffffae1cb256>] ? alloc_pages_current+0x106/0x1f0
 [<ffffffffc034b62d>] xfs_vm_readpages+0x1d/0x20 [xfs]
 [<ffffffffae188a6a>] __do_page_cache_readahead+0x2ea/0x390
 [<ffffffffae1888a0>] ? __do_page_cache_readahead+0x120/0x390
 [<ffffffffae1891e1>] ra_submit+0x21/0x30
 [<ffffffffae17c085>] filemap_fault+0x395/0x420
 [<ffffffffae1a684f>] __do_fault+0x7f/0x570
 [<ffffffffae1aa6e7>] handle_mm_fault+0x217/0xc40
 [<ffffffffae0cbd27>] ? __lock_is_held+0x57/0x80
 [<ffffffffae7b81ce>] __do_page_fault+0x1ae/0x610
 [<ffffffffae0cbdae>] ? put_lock_stats.isra.28+0xe/0x30
 [<ffffffffae0cc706>] ? lock_release_holdtime.part.29+0xe6/0x160
 [<ffffffffae0ae61d>] ? get_parent_ip+0xd/0x50
 [<ffffffffae17880f>] ? context_tracking_user_exit+0x5f/0x190
 [<ffffffffae7b864e>] do_page_fault+0x1e/0x70
 [<ffffffffae7b4fd2>] page_fault+0x22/0x30


* Re: xfs i_lock vs mmap_sem lockdep trace.
  2014-03-29 22:31 ` Dave Jones
@ 2014-03-30 23:43   ` Dave Chinner
From: Dave Chinner @ 2014-03-30 23:43 UTC (permalink / raw)
  To: Dave Jones, Linux Kernel, xfs

On Sat, Mar 29, 2014 at 06:31:09PM -0400, Dave Jones wrote:
> Not sure if I've reported this already (it looks familiar, though I've not managed
> to find it in my sent mail folder).  This is rc8 + a diff to fix the stack usage reports
> I was seeing (diff at http://paste.fedoraproject.org/89854/13210913/raw)
> 
>  ======================================================
>  [ INFO: possible circular locking dependency detected ]
>  3.14.0-rc8+ #153 Not tainted
>  -------------------------------------------------------
>  git/32710 is trying to acquire lock:
>   (&(&ip->i_lock)->mr_lock){++++.+}, at: [<ffffffffc03bd782>] xfs_ilock+0x122/0x250 [xfs]
>  
> but task is already holding lock:
>   (&mm->mmap_sem){++++++}, at: [<ffffffffae7b816a>] __do_page_fault+0x14a/0x610
> 
> which lock already depends on the new lock.

filldir on a directory inode vs page fault on regular file. Known
issue, definitely a false positive. We have to change locking
algorithms to avoid such deficiencies of lockdep (a case of "lockdep
considered harmful", perhaps?) so it's not something I'm about to
rush...
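
To make the two orderings concrete, here is a small, self-contained
userspace sketch of what lockdep is comparing.  The names dir_ilock,
file_ilock and mmap_sem are illustrative stand-ins, not kernel APIs:
one thread mirrors the getdents path (directory i_lock, then a fault
under mmap_sem while dirents are copied out), the other mirrors a page
fault on a mapped regular file (mmap_sem, then that file's i_lock).
Because the two i_locks are distinct objects the program never
deadlocks; lockdep reports a cycle only because it folds every inode's
i_lock into a single class.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t dir_ilock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t file_ilock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t mmap_sem   = PTHREAD_MUTEX_INITIALIZER;

/* getdents() on a directory: i_lock(dir), then mmap_sem via filldir */
static void *getdents_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&dir_ilock);
	pthread_mutex_lock(&mmap_sem);		/* copy dirents to user memory */
	pthread_mutex_unlock(&mmap_sem);
	pthread_mutex_unlock(&dir_ilock);
	return NULL;
}

/* page fault on a mapped regular file: mmap_sem, then i_lock(file) */
static void *fault_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&mmap_sem);
	pthread_mutex_lock(&file_ilock);	/* block mapping under i_lock */
	pthread_mutex_unlock(&file_ilock);
	pthread_mutex_unlock(&mmap_sem);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, getdents_path, NULL);
	pthread_create(&b, NULL, fault_path, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	printf("no deadlock: dir_ilock and file_ilock are different locks\n");
	return 0;
}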

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

* Re: xfs i_lock vs mmap_sem lockdep trace.
  2014-03-30 23:43   ` Dave Chinner
@ 2014-03-30 23:57     ` Al Viro
From: Al Viro @ 2014-03-30 23:57 UTC (permalink / raw)
  To: Dave Chinner; +Cc: Dave Jones, Linux Kernel, xfs

On Mon, Mar 31, 2014 at 10:43:35AM +1100, Dave Chinner wrote:
> filldir on a directory inode vs page fault on regular file. Known
> issue, definitely a false positive. We have to change locking
> algorithms to avoid such deficiencies of lockdep (a case of "lockdep
> considered harmful", perhaps?) so it's not something I'm about to
> rush...

Give i_lock on directories a separate class, as it's been done for i_mutex...
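
As a concrete illustration of that suggestion, a minimal kernel-style
sketch is below: give the i_lock rwsem of directory inodes its own
lockdep class key when the inode is set up.  The field
ip->i_lock.mr_lock follows the lockdep report above, and
xfs_dir_ilock_class matches the class name that appears later in this
thread; the hook function and the xfs_nondir_ilock_class name are
assumptions for illustration, not quotes of the actual XFS code (which
would live inside fs/xfs/, where struct xfs_inode is visible).

#include <linux/fs.h>
#include <linux/lockdep.h>

/* One lockdep class for directory inodes, one for everything else. */
static struct lock_class_key xfs_dir_ilock_class;
static struct lock_class_key xfs_nondir_ilock_class;

/* Hypothetical hook, called once when the VFS inode is initialised. */
static void example_set_ilock_class(struct xfs_inode *ip,
				    struct inode *inode)
{
	if (S_ISDIR(inode->i_mode))
		lockdep_set_class(&ip->i_lock.mr_lock,
				  &xfs_dir_ilock_class);
	else
		lockdep_set_class(&ip->i_lock.mr_lock,
				  &xfs_nondir_ilock_class);
}

With separate classes, the "i_lock(dir) -> mmap_sem" dependency taken
by readdir and the "mmap_sem -> i_lock(file)" dependency taken by page
faults no longer close a cycle in lockdep's graph.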

* Re: xfs i_lock vs mmap_sem lockdep trace.
  2014-03-30 23:43   ` Dave Chinner
@ 2014-03-31  0:20     ` Dave Jones
From: Dave Jones @ 2014-03-31  0:20 UTC (permalink / raw)
  To: Dave Chinner; +Cc: Linux Kernel, xfs

On Mon, Mar 31, 2014 at 10:43:35AM +1100, Dave Chinner wrote:
 > On Sat, Mar 29, 2014 at 06:31:09PM -0400, Dave Jones wrote:
 > > Not sure if I've reported this already (it looks familiar, though I've not managed
 > > to find it in my sent mail folder).  This is rc8 + a diff to fix the stack usage reports
 > > I was seeing (diff at http://paste.fedoraproject.org/89854/13210913/raw)
 > > 
 > >  ======================================================
 > >  [ INFO: possible circular locking dependency detected ]
 > >  3.14.0-rc8+ #153 Not tainted
 > >  -------------------------------------------------------
 > >  git/32710 is trying to acquire lock:
 > >   (&(&ip->i_lock)->mr_lock){++++.+}, at: [<ffffffffc03bd782>] xfs_ilock+0x122/0x250 [xfs]
 > >  
 > > but task is already holding lock:
 > >   (&mm->mmap_sem){++++++}, at: [<ffffffffae7b816a>] __do_page_fault+0x14a/0x610
 > > 
 > > which lock already depends on the new lock.
 > 
 > filldir on a directory inode vs page fault on regular file. Known
 > issue, definitely a false positive.

ah yeah, thought it looked familiar. I think I reported this last summer.

 > We have to change locking
 > algorithms to avoid such deficiencies of lockdep (a case of "lockdep
 > considered harmful", perhaps?) so it's not something I'm about to
 > rush...

Bummer, as it makes lockdep useless on my test box using xfs because it
disables itself after hitting this very quickly.
(I re-enabled it a couple days ago wondering why I'd left it turned off,
 chances are it was because of this)

	Dave


* Re: xfs i_lock vs mmap_sem lockdep trace.
  2014-03-30 23:57     ` Al Viro
@ 2014-03-31  0:40       ` Dave Chinner
From: Dave Chinner @ 2014-03-31  0:40 UTC (permalink / raw)
  To: Al Viro; +Cc: Dave Jones, Linux Kernel, xfs

On Mon, Mar 31, 2014 at 12:57:17AM +0100, Al Viro wrote:
> On Mon, Mar 31, 2014 at 10:43:35AM +1100, Dave Chinner wrote:
> > filldir on a directory inode vs page fault on regular file. Known
> > issue, definitely a false positive. We have to change locking
> > algorithms to avoid such deficiencies of lockdep (a case of "lockdep
> > considered harmful", perhaps?) so it's not something I'm about to
> > rush...
> 
> Give i_lock on directories a separate class, as it's been done for i_mutex...

Already done that. Commit:

93a8614 xfs: fix directory inode iolock lockdep false positive

This just changes where the false positives come from. This
insanity, for example, where shmem instantiates an inode in the page
fault path and so triggers selinux-related lockdep fun:

http://oss.sgi.com/archives/xfs/2014-02/msg00618.html

and this with reclaim state contexts:

http://oss.sgi.com/archives/xfs/2014-03/msg00145.html

I even hacked a patch to move the inode classes to per-fstype classes,
and that just pushed the false positive somewhere else. It's just
another horrible game of whack-a-mole, caused by XFS doing something
different. The first possible fix:

http://oss.sgi.com/archives/xfs/2014-03/msg00146.html

is a bit of a big hammer approach, so the approach I'm looking at is
the "don't cache mappings in readdir" solution noted here:

http://oss.sgi.com/archives/xfs/2014-03/msg00163.html

Note that the problem that the additional locking added in 3.13
resolved cannot be triggered by anything using the VFS for
directory access. The issue is that one of SGI's plug-ins (CXFS)
can access directories without going through the VFS, and so the
internal XFS locking needs to serialise readdir vs directory
modification safely....

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

* Re: xfs i_lock vs mmap_sem lockdep trace.
  2014-03-31  0:20     ` Dave Jones
@ 2014-03-31  0:42       ` Dave Chinner
From: Dave Chinner @ 2014-03-31  0:42 UTC (permalink / raw)
  To: Dave Jones, Linux Kernel, xfs

On Sun, Mar 30, 2014 at 08:20:30PM -0400, Dave Jones wrote:
> On Mon, Mar 31, 2014 at 10:43:35AM +1100, Dave Chinner wrote:
>  > On Sat, Mar 29, 2014 at 06:31:09PM -0400, Dave Jones wrote:
>  > > Not sure if I've reported this already (it looks familiar, though I've not managed
>  > > to find it in my sent mail folder).  This is rc8 + a diff to fix the stack usage reports
>  > > I was seeing (diff at http://paste.fedoraproject.org/89854/13210913/raw)
>  > > 
>  > >  ======================================================
>  > >  [ INFO: possible circular locking dependency detected ]
>  > >  3.14.0-rc8+ #153 Not tainted
>  > >  -------------------------------------------------------
>  > >  git/32710 is trying to acquire lock:
>  > >   (&(&ip->i_lock)->mr_lock){++++.+}, at: [<ffffffffc03bd782>] xfs_ilock+0x122/0x250 [xfs]
>  > >  
>  > > but task is already holding lock:
>  > >   (&mm->mmap_sem){++++++}, at: [<ffffffffae7b816a>] __do_page_fault+0x14a/0x610
>  > > 
>  > > which lock already depends on the new lock.
>  > 
>  > filldir on a directory inode vs page fault on regular file. Known
>  > issue, definitely a false positive.
> 
> ah yeah, thought it looked familiar. I think I reported this last summer.
> 
>  > We have to change locking
>  > algorithms to avoid such deficiencies of lockdep (a case of "lockdep
>  > considered harmful", perhaps?) so it's not something I'm about to
>  > rush...
> 
> Bummer, as it makes lockdep useless on my test box using xfs because it
> disables itself after hitting this very quickly.
> (I re-enabled it a couple days ago wondering why I'd left it turned off,
>  chances are it was because of this)

Yup, and seeing as SGI haven't shown any indication that they are
going to help fix it any time soon, it won't get fixed until I get
to it (hopefully) sometime soon.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

* Re: xfs i_lock vs mmap_sem lockdep trace.
  2014-03-31  0:40       ` Dave Chinner
@ 2014-04-08 20:40         ` Sasha Levin
From: Sasha Levin @ 2014-04-08 20:40 UTC (permalink / raw)
  To: Dave Chinner, Al Viro; +Cc: Dave Jones, Linux Kernel, xfs

On 03/30/2014 08:40 PM, Dave Chinner wrote:
> On Mon, Mar 31, 2014 at 12:57:17AM +0100, Al Viro wrote:
>> > On Mon, Mar 31, 2014 at 10:43:35AM +1100, Dave Chinner wrote:
>>> > > filldir on a directory inode vs page fault on regular file. Known
>>> > > issue, definitely a false positive. We have to change locking
>>> > > algorithms to avoid such deficiencies of lockdep (a case of "lockdep
>>> > > considered harmful", perhaps?) so it's not something I'm about to
>>> > > rush...
>> > 
>> > Give i_lock on directories a separate class, as it's been done for i_mutex...
> Already done that. Commit:
> 
> 93a8614 xfs: fix directory inode iolock lockdep false positive

Hi Dave,

The commit above introduces a new lockdep issue for me:

[ 3162.917171] ======================================================
[ 3162.920402] [ INFO: RECLAIM_FS-READ-safe -> RECLAIM_FS-READ-unsafe lock order detected ]
[ 3162.934790] 3.14.0-next-20140408-sasha-00023-g06962b5 #384 Not tainted
[ 3162.934790] ------------------------------------------------------
[ 3162.934790] trinity-main/17183 [HC0[0]:SC0[0]:HE1:SE1] is trying to acquire:
[ 3162.934790] (&xfs_dir_ilock_class){++++..}, at: xfs_ilock (fs/xfs/mrlock.h:50 fs/xfs/xfs_inode.c:165)
[ 3162.934790]
[ 3162.934790] and this task is already holding:
[ 3162.934790] (sb_internal){.+.+.?}, at: xfs_trans_alloc (fs/xfs/xfs_trans.c:67)
[ 3162.934790] which would create a new lock dependency:
[ 3162.934790]  (sb_internal){.+.+.?} -> (&xfs_dir_ilock_class){++++..}
[ 3162.934790]
[ 3162.934790] but this new dependency connects a RECLAIM_FS-READ-irq-safe lock:
[ 3162.934790]  (sb_internal){.+.+.?}
... which became RECLAIM_FS-READ-irq-safe at:
[ 3162.934790] __lock_acquire (kernel/locking/lockdep.c:2821 kernel/locking/lockdep.c:3138)
[ 3162.934790] lock_acquire (arch/x86/include/asm/current.h:14 kernel/locking/lockdep.c:3602)
[ 3162.934790] __sb_start_write (fs/super.c:1187)
[ 3162.934790] xfs_trans_alloc (fs/xfs/xfs_trans.c:67)
[ 3162.934790] xfs_iomap_write_allocate (fs/xfs/xfs_iomap.c:689)
[ 3162.934790] xfs_map_blocks (fs/xfs/xfs_aops.c:340)
[ 3162.934790] xfs_vm_writepage (fs/xfs/xfs_aops.c:1081)
[ 3162.934790] shrink_page_list (mm/vmscan.c:503 mm/vmscan.c:1015)
[ 3162.934790] shrink_inactive_list (include/linux/spinlock.h:328 mm/vmscan.c:1503)
[ 3162.934790] shrink_lruvec (mm/vmscan.c:1830 mm/vmscan.c:2054)
[ 3162.934790] shrink_zone (mm/vmscan.c:2235)
[ 3162.934790] kswapd_shrink_zone (include/linux/nodemask.h:131 include/linux/nodemask.h:131 mm/vmscan.c:2894)
[ 3162.934790] kswapd (mm/vmscan.c:3080 mm/vmscan.c:3286)
[ 3162.934790] kthread (kernel/kthread.c:210)
[ 3162.934790] ret_from_fork (arch/x86/kernel/entry_64.S:555)
[ 3162.934790]
[ 3162.934790] to a RECLAIM_FS-READ-irq-unsafe lock:
[ 3162.934790]  (&mm->mmap_sem){++++++}
... which became RECLAIM_FS-READ-irq-unsafe at:
[ 3162.934790] ... mark_held_locks (kernel/locking/lockdep.c:2523)
[ 3162.934790] lockdep_trace_alloc (kernel/locking/lockdep.c:2745 kernel/locking/lockdep.c:2760)
[ 3162.934790] __alloc_pages_nodemask (mm/page_alloc.c:2721)
[ 3162.934790] alloc_pages_current (mm/mempolicy.c:2131)
[ 3162.934790] pte_alloc_one (arch/x86/mm/pgtable.c:28)
[ 3162.934790] __pte_alloc (mm/memory.c:557)
[ 3162.934790] move_page_tables (mm/mremap.c:209 (discriminator 1))
[ 3162.934790] shift_arg_pages (fs/exec.c:607)
[ 3162.934790] setup_arg_pages (fs/exec.c:715)
[ 3162.934790] load_elf_binary (fs/binfmt_elf.c:745)
[ 3162.934790] search_binary_handler (fs/exec.c:1392)
[ 3162.934790] do_execve_common.isra.19 (fs/exec.c:1429 fs/exec.c:1525)
[ 3162.934790] do_execve (fs/exec.c:1568)
[ 3162.934790] run_init_process (init/main.c:818)
[ 3162.934790] kernel_init (init/main.c:866)
[ 3162.934790] ret_from_fork (arch/x86/kernel/entry_64.S:555)
[ 3162.934790]
[ 3162.934790] other info that might help us debug this:
[ 3162.934790]
[ 3162.934790] Chain exists of:
sb_internal --> &xfs_dir_ilock_class --> &mm->mmap_sem

[ 3162.934790]  Possible interrupt unsafe locking scenario:
[ 3162.934790]
[ 3162.934790]        CPU0                    CPU1
[ 3162.934790]        ----                    ----
[ 3162.934790]   lock(&mm->mmap_sem);
[ 3162.934790]                                local_irq_disable();
[ 3162.934790]                                lock(sb_internal);
[ 3162.934790]                                lock(&xfs_dir_ilock_class);
[ 3162.934790]   <Interrupt>
[ 3162.934790]     lock(sb_internal);
[ 3162.934790]
[ 3162.934790]  *** DEADLOCK ***
[ 3162.934790]
[ 3162.934790] 3 locks held by trinity-main/17183:
[ 3162.934790] #0: (&type->i_mutex_dir_key#12){+.+.+.}, at: iterate_dir (fs/readdir.c:35)
[ 3162.934790] #1: (sb_writers#34){.+.+.+}, at: touch_atime (fs/inode.c:1550)
[ 3162.934790] #2: (sb_internal){.+.+.?}, at: xfs_trans_alloc (fs/xfs/xfs_trans.c:67)
[ 3162.934790]
the dependencies between RECLAIM_FS-READ-irq-safe lock and the holding lock:
[ 3162.934790] -> (sb_internal){.+.+.?} ops: 1021 {
[ 3162.934790]    HARDIRQ-ON-R at:
[ 3162.934790] __lock_acquire (kernel/locking/lockdep.c:2792 kernel/locking/lockdep.c:3138)
[ 3162.934790] lock_acquire (arch/x86/include/asm/current.h:14 kernel/locking/lockdep.c:3602)
[ 3162.934790] __sb_start_write (fs/super.c:1187)
[ 3162.934790] xfs_trans_alloc (fs/xfs/xfs_trans.c:67)
[ 3162.934790] xfs_create (fs/xfs/xfs_inode.c:1191)
[ 3162.934790] xfs_vn_mknod (fs/xfs/xfs_iops.c:161)
[ 3162.934790] xfs_vn_create (fs/xfs/xfs_iops.c:204)
[ 3162.934790] vfs_create (fs/namei.c:2505)
[ 3162.934790] do_last (fs/namei.c:2844 fs/namei.c:2929)
[ 3162.934790] path_openat (fs/namei.c:3181)
[ 3162.934790] do_filp_open (fs/namei.c:3230)
[ 3162.934790] do_sys_open (fs/open.c:1014)
[ 3162.934790] SyS_open (fs/open.c:1026)
[ 3162.934790] tracesys (arch/x86/kernel/entry_64.S:749)
[ 3162.934790]    SOFTIRQ-ON-R at:
[ 3162.934790] __lock_acquire (kernel/locking/lockdep.c:2804 kernel/locking/lockdep.c:3138)
[ 3162.934790] lock_acquire (arch/x86/include/asm/current.h:14 kernel/locking/lockdep.c:3602)
[ 3162.934790] __sb_start_write (fs/super.c:1187)
[ 3162.934790] xfs_trans_alloc (fs/xfs/xfs_trans.c:67)
[ 3162.934790] xfs_create (fs/xfs/xfs_inode.c:1191)
[ 3162.934790] xfs_vn_mknod (fs/xfs/xfs_iops.c:161)
[ 3162.934790] xfs_vn_create (fs/xfs/xfs_iops.c:204)
[ 3162.934790] vfs_create (fs/namei.c:2505)
[ 3162.934790] do_last (fs/namei.c:2844 fs/namei.c:2929)
[ 3162.934790] path_openat (fs/namei.c:3181)
[ 3162.934790] do_filp_open (fs/namei.c:3230)
[ 3162.934790] do_sys_open (fs/open.c:1014)
[ 3162.934790] SyS_open (fs/open.c:1026)
[ 3162.934790] tracesys (arch/x86/kernel/entry_64.S:749)
[ 3162.934790]    IN-RECLAIM_FS-R at:
[ 3162.934790] __lock_acquire (kernel/locking/lockdep.c:2821 kernel/locking/lockdep.c:3138)
[ 3162.934790] lock_acquire (arch/x86/include/asm/current.h:14 kernel/locking/lockdep.c:3602)
[ 3162.934790] __sb_start_write (fs/super.c:1187)
[ 3162.934790] xfs_trans_alloc (fs/xfs/xfs_trans.c:67)
[ 3162.934790] xfs_iomap_write_allocate (fs/xfs/xfs_iomap.c:689)
[ 3162.934790] xfs_map_blocks (fs/xfs/xfs_aops.c:340)
[ 3162.934790] xfs_vm_writepage (fs/xfs/xfs_aops.c:1081)
[ 3162.934790] shrink_page_list (mm/vmscan.c:503 mm/vmscan.c:1015)
[ 3162.934790] shrink_inactive_list (include/linux/spinlock.h:328 mm/vmscan.c:1503)
[ 3162.934790] shrink_lruvec (mm/vmscan.c:1830 mm/vmscan.c:2054)
[ 3162.934790] shrink_zone (mm/vmscan.c:2235)
[ 3162.934790] kswapd_shrink_zone (include/linux/nodemask.h:131 include/linux/nodemask.h:131 mm/vmscan.c:2894)
[ 3162.934790] kswapd (mm/vmscan.c:3080 mm/vmscan.c:3286)
[ 3162.934790] kthread (kernel/kthread.c:210)
[ 3162.934790] ret_from_fork (arch/x86/kernel/entry_64.S:555)
[ 3162.934790]    RECLAIM_FS-ON-R at:
[ 3162.934790] mark_held_locks (kernel/locking/lockdep.c:2523)
[ 3162.934790] lockdep_trace_alloc (kernel/locking/lockdep.c:2745 kernel/locking/lockdep.c:2760)
[ 3162.934790] kmem_cache_alloc (mm/slub.c:965 mm/slub.c:2403 mm/slub.c:2476 mm/slub.c:2481)
[ 3162.934790] kmem_zone_alloc (fs/xfs/kmem.c:130)
[ 3162.934790] _xfs_trans_alloc (fs/xfs/xfs_trans.c:87 (discriminator 2))
[ 3162.934790] xfs_trans_alloc (fs/xfs/xfs_trans.c:68)
[ 3162.934790] xfs_create (fs/xfs/xfs_inode.c:1191)
[ 3162.934790] xfs_vn_mknod (fs/xfs/xfs_iops.c:161)
[ 3162.934790] xfs_vn_create (fs/xfs/xfs_iops.c:204)
[ 3162.934790] vfs_create (fs/namei.c:2505)
[ 3162.934790] do_last (fs/namei.c:2844 fs/namei.c:2929)
[ 3162.934790] path_openat (fs/namei.c:3181)
[ 3162.934790] do_filp_open (fs/namei.c:3230)
[ 3162.934790] do_sys_open (fs/open.c:1014)
[ 3162.934790] SyS_open (fs/open.c:1026)
[ 3162.934790] tracesys (arch/x86/kernel/entry_64.S:749)
[ 3162.934790]    INITIAL USE at:
[ 3162.934790] __lock_acquire (kernel/locking/lockdep.c:3142)
[ 3162.934790] lock_acquire (arch/x86/include/asm/current.h:14 kernel/locking/lockdep.c:3602)
[ 3162.934790] __sb_start_write (fs/super.c:1187)
[ 3162.934790] xfs_trans_alloc (fs/xfs/xfs_trans.c:67)
[ 3162.934790] xfs_create (fs/xfs/xfs_inode.c:1191)
[ 3162.934790] xfs_vn_mknod (fs/xfs/xfs_iops.c:161)
[ 3162.934790] xfs_vn_create (fs/xfs/xfs_iops.c:204)
[ 3162.934790] vfs_create (fs/namei.c:2505)
[ 3162.934790] do_last (fs/namei.c:2844 fs/namei.c:2929)
[ 3162.934790] path_openat (fs/namei.c:3181)
[ 3162.934790] do_filp_open (fs/namei.c:3230)
[ 3162.934790] do_sys_open (fs/open.c:1014)
[ 3162.934790] SyS_open (fs/open.c:1026)
[ 3162.934790] tracesys (arch/x86/kernel/entry_64.S:749)
[ 3162.934790]  }
[ 3162.934790] ... key at: xfs_fs_type (??:?)
[ 3162.934790]  ... acquired at:
[ 3162.934790] check_irq_usage (kernel/locking/lockdep.c:1638)
[ 3162.934790] __lock_acquire (kernel/locking/lockdep_states.h:9 kernel/locking/lockdep.c:1844 kernel/locking/lockdep.c:1945 kernel/locking/lockdep.c:2131 kernel/locking/lockdep.c:3182)
[ 3162.934790] lock_acquire (arch/x86/include/asm/current.h:14 kernel/locking/lockdep.c:3602)
[ 3162.934790] down_write_nested (arch/x86/include/asm/rwsem.h:130 kernel/locking/rwsem.c:143)
[ 3162.934790] xfs_ilock (fs/xfs/mrlock.h:50 fs/xfs/xfs_inode.c:165)
[ 3162.934790] xfs_vn_update_time (fs/xfs/xfs_iops.c:944)
[ 3162.934790] update_time (fs/inode.c:1502)
[ 3162.934790] touch_atime (fs/inode.c:1565)
[ 3162.934790] iterate_dir (include/linux/fs.h:1815 fs/readdir.c:43)
[ 3162.934790] SyS_getdents (fs/readdir.c:214 fs/readdir.c:193)
[ 3162.934790] tracesys (arch/x86/kernel/entry_64.S:749)
[ 3162.934790]
[ 3162.934790]
the dependencies between the lock to be acquired and RECLAIM_FS-READ-irq-unsafe lock:
[ 3162.934790]  -> (&mm->mmap_sem){++++++} ops: 217938150 {
[ 3162.934790]     HARDIRQ-ON-W at:
[ 3162.934790] __lock_acquire (kernel/locking/lockdep.c:2800 kernel/locking/lockdep.c:3138)
[ 3162.934790] lock_acquire (arch/x86/include/asm/current.h:14 kernel/locking/lockdep.c:3602)
[ 3162.934790] down_write (arch/x86/include/asm/rwsem.h:130 kernel/locking/rwsem.c:50)
[ 3162.934790] do_execve_common.isra.19 (fs/exec.c:259 fs/exec.c:374 fs/exec.c:1496)
[ 3162.934790] do_execve (fs/exec.c:1568)
[ 3162.934790] run_init_process (init/main.c:818)
[ 3162.934790] kernel_init (init/main.c:866)
[ 3162.934790] ret_from_fork (arch/x86/kernel/entry_64.S:555)
[ 3162.934790]     HARDIRQ-ON-R at:
[ 3162.934790] __lock_acquire (kernel/locking/lockdep.c:2792 kernel/locking/lockdep.c:3138)
[ 3162.934790] lock_acquire (arch/x86/include/asm/current.h:14 kernel/locking/lockdep.c:3602)
[ 3162.934790] might_fault (mm/memory.c:4214)
[ 3162.934790] __clear_user (arch/x86/lib/usercopy_64.c:18 arch/x86/lib/usercopy_64.c:21)
[ 3162.934790] clear_user (arch/x86/lib/usercopy_64.c:54)
[ 3162.934790] padzero (fs/binfmt_elf.c:122)
[ 3162.934790] load_elf_binary (fs/binfmt_elf.c:894 (discriminator 1))
[ 3162.934790] search_binary_handler (fs/exec.c:1392)
[ 3162.934790] do_execve_common.isra.19 (fs/exec.c:1429 fs/exec.c:1525)
[ 3162.934790] do_execve (fs/exec.c:1568)
[ 3162.934790] run_init_process (init/main.c:818)
[ 3162.934790] kernel_init (init/main.c:866)
[ 3162.934790] ret_from_fork (arch/x86/kernel/entry_64.S:555)
[ 3162.934790]     SOFTIRQ-ON-W at:
[ 3162.934790] __lock_acquire (kernel/locking/lockdep.c:2804 kernel/locking/lockdep.c:3138)
[ 3162.934790] lock_acquire (arch/x86/include/asm/current.h:14 kernel/locking/lockdep.c:3602)
[ 3162.934790] down_write (arch/x86/include/asm/rwsem.h:130 kernel/locking/rwsem.c:50)
[ 3162.934790] do_execve_common.isra.19 (fs/exec.c:259 fs/exec.c:374 fs/exec.c:1496)
[ 3162.934790] do_execve (fs/exec.c:1568)
[ 3162.934790] run_init_process (init/main.c:818)
[ 3162.934790] kernel_init (init/main.c:866)
[ 3162.934790] ret_from_fork (arch/x86/kernel/entry_64.S:555)
[ 3162.934790]     SOFTIRQ-ON-R at:
[ 3162.934790] __lock_acquire (kernel/locking/lockdep.c:2804 kernel/locking/lockdep.c:3138)
[ 3162.934790] lock_acquire (arch/x86/include/asm/current.h:14 kernel/locking/lockdep.c:3602)
[ 3162.934790] might_fault (mm/memory.c:4214)
[ 3162.934790] __clear_user (arch/x86/lib/usercopy_64.c:18 arch/x86/lib/usercopy_64.c:21)
[ 3162.934790] clear_user (arch/x86/lib/usercopy_64.c:54)
[ 3162.934790] padzero (fs/binfmt_elf.c:122)
[ 3162.934790] load_elf_binary (fs/binfmt_elf.c:894 (discriminator 1))
[ 3162.934790] search_binary_handler (fs/exec.c:1392)
[ 3162.934790] do_execve_common.isra.19 (fs/exec.c:1429 fs/exec.c:1525)
[ 3162.934790] do_execve (fs/exec.c:1568)
[ 3162.934790] run_init_process (init/main.c:818)
[ 3162.934790] kernel_init (init/main.c:866)
[ 3162.934790] ret_from_fork (arch/x86/kernel/entry_64.S:555)
[ 3162.934790]     RECLAIM_FS-ON-W at:
[ 3162.934790] mark_held_locks (kernel/locking/lockdep.c:2523)
[ 3162.934790] lockdep_trace_alloc (kernel/locking/lockdep.c:2745 kernel/locking/lockdep.c:2760)
[ 3162.934790] __alloc_pages_nodemask (mm/page_alloc.c:2721)
[ 3162.934790] alloc_pages_current (mm/mempolicy.c:2131)
[ 3162.934790] pte_alloc_one (arch/x86/mm/pgtable.c:28)
[ 3162.934790] __pte_alloc (mm/memory.c:557)
[ 3162.934790] move_page_tables (mm/mremap.c:209 (discriminator 1))
[ 3162.934790] shift_arg_pages (fs/exec.c:607)
[ 3162.934790] setup_arg_pages (fs/exec.c:715)
[ 3162.934790] load_elf_binary (fs/binfmt_elf.c:745)
[ 3162.934790] search_binary_handler (fs/exec.c:1392)
[ 3162.934790] do_execve_common.isra.19 (fs/exec.c:1429 fs/exec.c:1525)
[ 3162.934790] do_execve (fs/exec.c:1568)
[ 3162.934790] run_init_process (init/main.c:818)
[ 3162.934790] kernel_init (init/main.c:866)
[ 3162.934790] ret_from_fork (arch/x86/kernel/entry_64.S:555)
[ 3162.934790]     RECLAIM_FS-ON-R at:
[ 3162.934790] mark_held_locks (kernel/locking/lockdep.c:2523)
[ 3162.934790] lockdep_trace_alloc (kernel/locking/lockdep.c:2745 kernel/locking/lockdep.c:2760)
[ 3162.934790] __alloc_pages_nodemask (mm/page_alloc.c:2721)
[ 3162.934790] alloc_pages_current (mm/mempolicy.c:2131)
[ 3162.934790] __get_free_pages (mm/page_alloc.c:2803)
[ 3162.934790] get_zeroed_page (mm/page_alloc.c:2812)
[ 3162.934790] __pud_alloc (mm/memory.c:3844)
[ 3162.934790] __handle_mm_fault (include/linux/mm.h:1368 mm/memory.c:3728)
[ 3162.934790] handle_mm_fault (mm/memory.c:3819)
[ 3162.934790] __do_page_fault (arch/x86/mm/fault.c:1220)
[ 3162.934790] do_page_fault (arch/x86/mm/fault.c:1272 include/linux/jump_label.h:105 include/linux/context_tracking_state.h:27 include/linux/context_tracking.h:45 arch/x86/mm/fault.c:1273)
[ 3162.934790] do_async_page_fault (arch/x86/kernel/kvm.c:263)
[ 3162.934790] async_page_fault (arch/x86/kernel/entry_64.S:1496)
[ 3162.934790] clear_user (arch/x86/lib/usercopy_64.c:54)
[ 3162.934790] padzero (fs/binfmt_elf.c:122)
[ 3162.934790] load_elf_binary (fs/binfmt_elf.c:894 (discriminator 1))
[ 3162.934790] search_binary_handler (fs/exec.c:1392)
[ 3162.934790] do_execve_common.isra.19 (fs/exec.c:1429 fs/exec.c:1525)
[ 3162.934790] do_execve (fs/exec.c:1568)
[ 3162.934790] run_init_process (init/main.c:818)
[ 3162.934790] kernel_init (init/main.c:866)
[ 3162.934790] ret_from_fork (arch/x86/kernel/entry_64.S:555)
[ 3162.934790]     INITIAL USE at:
[ 3162.934790] __lock_acquire (kernel/locking/lockdep.c:3142)
[ 3162.934790] lock_acquire (arch/x86/include/asm/current.h:14 kernel/locking/lockdep.c:3602)
[ 3162.934790] down_write (arch/x86/include/asm/rwsem.h:130 kernel/locking/rwsem.c:50)
[ 3162.934790] do_execve_common.isra.19 (fs/exec.c:259 fs/exec.c:374 fs/exec.c:1496)
[ 3162.934790] do_execve (fs/exec.c:1568)
[ 3162.934790] run_init_process (init/main.c:818)
[ 3162.934790] kernel_init (init/main.c:866)
[ 3162.934790] ret_from_fork (arch/x86/kernel/entry_64.S:555)
[ 3162.934790]   }
[ 3162.934790] ... key at: __key.50836 (??:?)
[ 3162.934790]   ... acquired at:
[ 3162.934790] lock_acquire (arch/x86/include/asm/current.h:14 kernel/locking/lockdep.c:3602)
[ 3162.934790] might_fault (mm/memory.c:4214)
[ 3162.934790] filldir (arch/x86/include/asm/uaccess.h:731 fs/readdir.c:176)
[ 3162.934790] xfs_dir2_sf_getdents (fs/xfs/xfs_dir2_readdir.c:131)
[ 3162.934790] xfs_readdir (fs/xfs/xfs_dir2_readdir.c:689)
[ 3162.934790] xfs_file_readdir (fs/xfs/xfs_file.c:977)
[ 3162.934790] iterate_dir (fs/readdir.c:42)
[ 3162.934790] SyS_getdents (fs/readdir.c:214 fs/readdir.c:193)
[ 3162.934790] tracesys (arch/x86/kernel/entry_64.S:749)
[ 3162.934790]
[ 3162.934790] -> (&xfs_dir_ilock_class){++++..} ops: 6 {
[ 3162.934790]    HARDIRQ-ON-W at:
[ 3162.934790] __lock_acquire (kernel/locking/lockdep.c:2800 kernel/locking/lockdep.c:3138)
[ 3162.934790] lock_acquire (arch/x86/include/asm/current.h:14 kernel/locking/lockdep.c:3602)
[ 3162.934790] down_write_nested (arch/x86/include/asm/rwsem.h:130 kernel/locking/rwsem.c:143)
[ 3162.934790] xfs_ilock (fs/xfs/mrlock.h:50 fs/xfs/xfs_inode.c:165)
[ 3162.934790] xfs_vn_update_time (fs/xfs/xfs_iops.c:944)
[ 3162.934790] update_time (fs/inode.c:1502)
[ 3162.934790] touch_atime (fs/inode.c:1565)
[ 3162.934790] iterate_dir (include/linux/fs.h:1815 fs/readdir.c:43)
[ 3162.934790] SyS_getdents (fs/readdir.c:214 fs/readdir.c:193)
[ 3162.934790] tracesys (arch/x86/kernel/entry_64.S:749)
[ 3162.934790]    HARDIRQ-ON-R at:
[ 3162.934790] __lock_acquire (kernel/locking/lockdep.c:2792 kernel/locking/lockdep.c:3138)
[ 3162.934790] lock_acquire (arch/x86/include/asm/current.h:14 kernel/locking/lockdep.c:3602)
[ 3162.934790] down_read_nested (arch/x86/include/asm/rwsem.h:83 kernel/locking/rwsem.c:114)
[ 3162.934790] xfs_ilock (fs/xfs/mrlock.h:43 fs/xfs/xfs_inode.c:167)
[ 3162.934790] xfs_ilock_data_map_shared (fs/xfs/xfs_inode.c:106)
[ 3162.934790] xfs_lookup (fs/xfs/xfs_inode.c:589)
[ 3162.934790] xfs_vn_lookup (fs/xfs/xfs_iops.c:230)
[ 3162.934790] lookup_real (fs/namei.c:1325)
[ 3162.934790] __lookup_hash (fs/namei.c:1343)
[ 3162.934790] lookup_slow (fs/namei.c:1454)
[ 3162.934790] path_lookupat (fs/namei.c:1534 fs/namei.c:1904 fs/namei.c:1938)
[ 3162.934790] filename_lookup (fs/namei.c:1978)
[ 3162.934790] user_path_at_empty (fs/namei.c:2126)
[ 3162.934790] user_path_at (fs/namei.c:2137)
[ 3162.934790] vfs_fstatat (fs/stat.c:107)
[ 3162.934790] vfs_stat (fs/stat.c:124)
[ 3162.934790] SYSC_newstat (fs/stat.c:272)
[ 3162.934790] SyS_newstat (fs/stat.c:267)
[ 3162.934790] tracesys (arch/x86/kernel/entry_64.S:749)
[ 3162.934790]    SOFTIRQ-ON-W at:
[ 3162.934790] __lock_acquire (kernel/locking/lockdep.c:2804 kernel/locking/lockdep.c:3138)
[ 3162.934790] lock_acquire (arch/x86/include/asm/current.h:14 kernel/locking/lockdep.c:3602)
[ 3162.934790] down_write_nested (arch/x86/include/asm/rwsem.h:130 kernel/locking/rwsem.c:143)
[ 3162.934790] xfs_ilock (fs/xfs/mrlock.h:50 fs/xfs/xfs_inode.c:165)
[ 3162.934790] xfs_vn_update_time (fs/xfs/xfs_iops.c:944)
[ 3162.934790] update_time (fs/inode.c:1502)
[ 3162.934790] touch_atime (fs/inode.c:1565)
[ 3162.934790] iterate_dir (include/linux/fs.h:1815 fs/readdir.c:43)
[ 3162.934790] SyS_getdents (fs/readdir.c:214 fs/readdir.c:193)
[ 3162.934790] tracesys (arch/x86/kernel/entry_64.S:749)
[ 3162.934790]    SOFTIRQ-ON-R at:
[ 3162.934790] __lock_acquire (kernel/locking/lockdep.c:2804 kernel/locking/lockdep.c:3138)
[ 3162.934790] lock_acquire (arch/x86/include/asm/current.h:14 kernel/locking/lockdep.c:3602)
[ 3162.934790] down_read_nested (arch/x86/include/asm/rwsem.h:83 kernel/locking/rwsem.c:114)
[ 3162.934790] xfs_ilock (fs/xfs/mrlock.h:43 fs/xfs/xfs_inode.c:167)
[ 3162.934790] xfs_ilock_data_map_shared (fs/xfs/xfs_inode.c:106)
[ 3162.934790] xfs_lookup (fs/xfs/xfs_inode.c:589)
[ 3162.934790] xfs_vn_lookup (fs/xfs/xfs_iops.c:230)
[ 3162.934790] lookup_real (fs/namei.c:1325)
[ 3162.934790] __lookup_hash (fs/namei.c:1343)
[ 3162.934790] lookup_slow (fs/namei.c:1454)
[ 3162.934790] path_lookupat (fs/namei.c:1534 fs/namei.c:1904 fs/namei.c:1938)
[ 3162.934790] filename_lookup (fs/namei.c:1978)
[ 3162.934790] user_path_at_empty (fs/namei.c:2126)
[ 3162.934790] user_path_at (fs/namei.c:2137)
[ 3162.934790] vfs_fstatat (fs/stat.c:107)
[ 3162.934790] vfs_stat (fs/stat.c:124)
[ 3162.934790] SYSC_newstat (fs/stat.c:272)
[ 3162.934790] SyS_newstat (fs/stat.c:267)
[ 3162.934790] tracesys (arch/x86/kernel/entry_64.S:749)
[ 3162.934790]    INITIAL USE at:
[ 3162.934790] __lock_acquire (kernel/locking/lockdep.c:3142)
[ 3162.934790] lock_acquire (arch/x86/include/asm/current.h:14 kernel/locking/lockdep.c:3602)
[ 3162.934790] down_read_nested (arch/x86/include/asm/rwsem.h:83 kernel/locking/rwsem.c:114)
[ 3162.934790] xfs_ilock (fs/xfs/mrlock.h:43 fs/xfs/xfs_inode.c:167)
[ 3162.934790] xfs_ilock_data_map_shared (fs/xfs/xfs_inode.c:106)
[ 3162.934790] xfs_lookup (fs/xfs/xfs_inode.c:589)
[ 3162.934790] xfs_vn_lookup (fs/xfs/xfs_iops.c:230)
[ 3162.934790] lookup_real (fs/namei.c:1325)
[ 3162.934790] __lookup_hash (fs/namei.c:1343)
[ 3162.934790] lookup_slow (fs/namei.c:1454)
[ 3162.934790] path_lookupat (fs/namei.c:1534 fs/namei.c:1904 fs/namei.c:1938)
[ 3162.934790] filename_lookup (fs/namei.c:1978)
[ 3162.934790] user_path_at_empty (fs/namei.c:2126)
[ 3162.934790] user_path_at (fs/namei.c:2137)
[ 3162.934790] vfs_fstatat (fs/stat.c:107)
[ 3162.934790] vfs_stat (fs/stat.c:124)
[ 3162.934790] SYSC_newstat (fs/stat.c:272)
[ 3162.934790] SyS_newstat (fs/stat.c:267)
[ 3162.934790] tracesys (arch/x86/kernel/entry_64.S:749)
[ 3162.934790]  }
[ 3162.934790] ... key at: xfs_dir_ilock_class (??:?)
[ 3162.934790]  ... acquired at:
[ 3162.934790] check_irq_usage (kernel/locking/lockdep.c:1638)
[ 3162.934790] __lock_acquire (kernel/locking/lockdep_states.h:9 kernel/locking/lockdep.c:1844 kernel/locking/lockdep.c:1945 kernel/locking/lockdep.c:2131 kernel/locking/lockdep.c:3182)
[ 3162.934790] lock_acquire (arch/x86/include/asm/current.h:14 kernel/locking/lockdep.c:3602)
[ 3162.934790] down_write_nested (arch/x86/include/asm/rwsem.h:130 kernel/locking/rwsem.c:143)
[ 3162.934790] xfs_ilock (fs/xfs/mrlock.h:50 fs/xfs/xfs_inode.c:165)
[ 3162.934790] xfs_vn_update_time (fs/xfs/xfs_iops.c:944)
[ 3162.934790] update_time (fs/inode.c:1502)
[ 3162.934790] touch_atime (fs/inode.c:1565)
[ 3162.934790] iterate_dir (include/linux/fs.h:1815 fs/readdir.c:43)
[ 3162.934790] SyS_getdents (fs/readdir.c:214 fs/readdir.c:193)
[ 3162.934790] tracesys (arch/x86/kernel/entry_64.S:749)
[ 3162.934790]
[ 3162.934790]
[ 3162.934790] stack backtrace:
[ 3162.934790] CPU: 0 PID: 17183 Comm: trinity-main Not tainted 3.14.0-next-20140408-sasha-00023-g06962b5 #384
[ 3162.934790]  ffffffff93a57840 ffff8804c55b5ad8 ffffffff9051ac81 0000000000000000
[ 3162.934790]  ffff8804c6b10d38 ffff8804c55b5be8 ffffffff8d1c0eb5 0000000000000000
[ 3162.934790]  ffff880400000001 0000000000000001 ffff8804c6b10000 ffffffff916c5d62
[ 3162.934790] Call Trace:
[ 3162.934790] dump_stack (lib/dump_stack.c:52)
[ 3162.934790] check_usage (kernel/locking/lockdep.c:1549 kernel/locking/lockdep.c:1580)
[ 3162.934790] ? save_stack_trace (arch/x86/kernel/stacktrace.c:64)
[ 3162.934790] ? check_usage_forwards (kernel/locking/lockdep.c:2371)
[ 3162.934790] check_irq_usage (kernel/locking/lockdep.c:1638)
[ 3162.934790] __lock_acquire (kernel/locking/lockdep_states.h:9 kernel/locking/lockdep.c:1844 kernel/locking/lockdep.c:1945 kernel/locking/lockdep.c:2131 kernel/locking/lockdep.c:3182)
[ 3162.934790] ? get_parent_ip (kernel/sched/core.c:2471)
[ 3162.934790] ? kmem_cache_alloc (include/linux/kmemleak.h:43 mm/slub.c:975 mm/slub.c:2468 mm/slub.c:2476 mm/slub.c:2481)
[ 3162.934790] ? get_parent_ip (kernel/sched/core.c:2471)
[ 3162.934790] ? preempt_count_sub (kernel/sched/core.c:2526)
[ 3162.934790] lock_acquire (arch/x86/include/asm/current.h:14 kernel/locking/lockdep.c:3602)
[ 3162.934790] ? xfs_ilock (fs/xfs/mrlock.h:50 fs/xfs/xfs_inode.c:165)
[ 3162.934790] ? xfs_vn_update_time (fs/xfs/xfs_iops.c:944)
[ 3162.934790] down_write_nested (arch/x86/include/asm/rwsem.h:130 kernel/locking/rwsem.c:143)
[ 3162.934790] ? xfs_ilock (fs/xfs/mrlock.h:50 fs/xfs/xfs_inode.c:165)
[ 3162.934790] ? xfs_trans_reserve (fs/xfs/xfs_trans.c:221)
[ 3162.934790] xfs_ilock (fs/xfs/mrlock.h:50 fs/xfs/xfs_inode.c:165)
[ 3162.934790] xfs_vn_update_time (fs/xfs/xfs_iops.c:944)
[ 3162.934790] update_time (fs/inode.c:1502)
[ 3162.934790] ? __mnt_want_write (arch/x86/include/asm/preempt.h:98 fs/namespace.c:358)
[ 3162.934790] touch_atime (fs/inode.c:1565)
[ 3162.934790] iterate_dir (include/linux/fs.h:1815 fs/readdir.c:43)
[ 3162.934790] SyS_getdents (fs/readdir.c:214 fs/readdir.c:193)
[ 3162.934790] ? iterate_dir (fs/readdir.c:151)
[ 3162.934790] tracesys (arch/x86/kernel/entry_64.S:749)


Thanks,
Sasha

[ 3162.934790] do_sys_open (fs/open.c:1014)
[ 3162.934790] SyS_open (fs/open.c:1026)
[ 3162.934790] tracesys (arch/x86/kernel/entry_64.S:749)
[ 3162.934790]    IN-RECLAIM_FS-R at:
[ 3162.934790] __lock_acquire (kernel/locking/lockdep.c:2821 kernel/locking/lockdep.c:3138)
[ 3162.934790] lock_acquire (arch/x86/include/asm/current.h:14 kernel/locking/lockdep.c:3602)
[ 3162.934790] __sb_start_write (fs/super.c:1187)
[ 3162.934790] xfs_trans_alloc (fs/xfs/xfs_trans.c:67)
[ 3162.934790] xfs_iomap_write_allocate (fs/xfs/xfs_iomap.c:689)
[ 3162.934790] xfs_map_blocks (fs/xfs/xfs_aops.c:340)
[ 3162.934790] xfs_vm_writepage (fs/xfs/xfs_aops.c:1081)
[ 3162.934790] shrink_page_list (mm/vmscan.c:503 mm/vmscan.c:1015)
[ 3162.934790] shrink_inactive_list (include/linux/spinlock.h:328 mm/vmscan.c:1503)
[ 3162.934790] shrink_lruvec (mm/vmscan.c:1830 mm/vmscan.c:2054)
[ 3162.934790] shrink_zone (mm/vmscan.c:2235)
[ 3162.934790] kswapd_shrink_zone (include/linux/nodemask.h:131 include/linux/nodemask.h:131 mm/vmscan.c:2894)
[ 3162.934790] kswapd (mm/vmscan.c:3080 mm/vmscan.c:3286)
[ 3162.934790] kthread (kernel/kthread.c:210)
[ 3162.934790] ret_from_fork (arch/x86/kernel/entry_64.S:555)
[ 3162.934790]    RECLAIM_FS-ON-R at:
[ 3162.934790] mark_held_locks (kernel/locking/lockdep.c:2523)
[ 3162.934790] lockdep_trace_alloc (kernel/locking/lockdep.c:2745 kernel/locking/lockdep.c:2760)
[ 3162.934790] kmem_cache_alloc (mm/slub.c:965 mm/slub.c:2403 mm/slub.c:2476 mm/slub.c:2481)
[ 3162.934790] kmem_zone_alloc (fs/xfs/kmem.c:130)
[ 3162.934790] _xfs_trans_alloc (fs/xfs/xfs_trans.c:87 (discriminator 2))
[ 3162.934790] xfs_trans_alloc (fs/xfs/xfs_trans.c:68)
[ 3162.934790] xfs_create (fs/xfs/xfs_inode.c:1191)
[ 3162.934790] xfs_vn_mknod (fs/xfs/xfs_iops.c:161)
[ 3162.934790] xfs_vn_create (fs/xfs/xfs_iops.c:204)
[ 3162.934790] vfs_create (fs/namei.c:2505)
[ 3162.934790] do_last (fs/namei.c:2844 fs/namei.c:2929)
[ 3162.934790] path_openat (fs/namei.c:3181)
[ 3162.934790] do_filp_open (fs/namei.c:3230)
[ 3162.934790] do_sys_open (fs/open.c:1014)
[ 3162.934790] SyS_open (fs/open.c:1026)
[ 3162.934790] tracesys (arch/x86/kernel/entry_64.S:749)
[ 3162.934790]    INITIAL USE at:
[ 3162.934790] __lock_acquire (kernel/locking/lockdep.c:3142)
[ 3162.934790] lock_acquire (arch/x86/include/asm/current.h:14 kernel/locking/lockdep.c:3602)
[ 3162.934790] __sb_start_write (fs/super.c:1187)
[ 3162.934790] xfs_trans_alloc (fs/xfs/xfs_trans.c:67)
[ 3162.934790] xfs_create (fs/xfs/xfs_inode.c:1191)
[ 3162.934790] xfs_vn_mknod (fs/xfs/xfs_iops.c:161)
[ 3162.934790] xfs_vn_create (fs/xfs/xfs_iops.c:204)
[ 3162.934790] vfs_create (fs/namei.c:2505)
[ 3162.934790] do_last (fs/namei.c:2844 fs/namei.c:2929)
[ 3162.934790] path_openat (fs/namei.c:3181)
[ 3162.934790] do_filp_open (fs/namei.c:3230)
[ 3162.934790] do_sys_open (fs/open.c:1014)
[ 3162.934790] SyS_open (fs/open.c:1026)
[ 3162.934790] tracesys (arch/x86/kernel/entry_64.S:749)
[ 3162.934790]  }
[ 3162.934790] ... key at: xfs_fs_type (??:?)
[ 3162.934790]  ... acquired at:
[ 3162.934790] check_irq_usage (kernel/locking/lockdep.c:1638)
[ 3162.934790] __lock_acquire (kernel/locking/lockdep_states.h:9 kernel/locking/lockdep.c:1844 kernel/locking/lockdep.c:1945 kernel/locking/lockdep.c:2131 kernel/locking/lockdep.c:3182)
[ 3162.934790] lock_acquire (arch/x86/include/asm/current.h:14 kernel/locking/lockdep.c:3602)
[ 3162.934790] down_write_nested (arch/x86/include/asm/rwsem.h:130 kernel/locking/rwsem.c:143)
[ 3162.934790] xfs_ilock (fs/xfs/mrlock.h:50 fs/xfs/xfs_inode.c:165)
[ 3162.934790] xfs_vn_update_time (fs/xfs/xfs_iops.c:944)
[ 3162.934790] update_time (fs/inode.c:1502)
[ 3162.934790] touch_atime (fs/inode.c:1565)
[ 3162.934790] iterate_dir (include/linux/fs.h:1815 fs/readdir.c:43)
[ 3162.934790] SyS_getdents (fs/readdir.c:214 fs/readdir.c:193)
[ 3162.934790] tracesys (arch/x86/kernel/entry_64.S:749)
[ 3162.934790]
[ 3162.934790]
the dependencies between the lock to be acquired and RECLAIM_FS-READ-irq-unsafe lock:
[ 3162.934790]  -> (&mm->mmap_sem){++++++} ops: 217938150 {
[ 3162.934790]     HARDIRQ-ON-W at:
[ 3162.934790] __lock_acquire (kernel/locking/lockdep.c:2800 kernel/locking/lockdep.c:3138)
[ 3162.934790] lock_acquire (arch/x86/include/asm/current.h:14 kernel/locking/lockdep.c:3602)
[ 3162.934790] down_write (arch/x86/include/asm/rwsem.h:130 kernel/locking/rwsem.c:50)
[ 3162.934790] do_execve_common.isra.19 (fs/exec.c:259 fs/exec.c:374 fs/exec.c:1496)
[ 3162.934790] do_execve (fs/exec.c:1568)
[ 3162.934790] run_init_process (init/main.c:818)
[ 3162.934790] kernel_init (init/main.c:866)
[ 3162.934790] ret_from_fork (arch/x86/kernel/entry_64.S:555)
[ 3162.934790]     HARDIRQ-ON-R at:
[ 3162.934790] __lock_acquire (kernel/locking/lockdep.c:2792 kernel/locking/lockdep.c:3138)
[ 3162.934790] lock_acquire (arch/x86/include/asm/current.h:14 kernel/locking/lockdep.c:3602)
[ 3162.934790] might_fault (mm/memory.c:4214)
[ 3162.934790] __clear_user (arch/x86/lib/usercopy_64.c:18 arch/x86/lib/usercopy_64.c:21)
[ 3162.934790] clear_user (arch/x86/lib/usercopy_64.c:54)
[ 3162.934790] padzero (fs/binfmt_elf.c:122)
[ 3162.934790] load_elf_binary (fs/binfmt_elf.c:894 (discriminator 1))
[ 3162.934790] search_binary_handler (fs/exec.c:1392)
[ 3162.934790] do_execve_common.isra.19 (fs/exec.c:1429 fs/exec.c:1525)
[ 3162.934790] do_execve (fs/exec.c:1568)
[ 3162.934790] run_init_process (init/main.c:818)
[ 3162.934790] kernel_init (init/main.c:866)
[ 3162.934790] ret_from_fork (arch/x86/kernel/entry_64.S:555)
[ 3162.934790]     SOFTIRQ-ON-W at:
[ 3162.934790] __lock_acquire (kernel/locking/lockdep.c:2804 kernel/locking/lockdep.c:3138)
[ 3162.934790] lock_acquire (arch/x86/include/asm/current.h:14 kernel/locking/lockdep.c:3602)
[ 3162.934790] down_write (arch/x86/include/asm/rwsem.h:130 kernel/locking/rwsem.c:50)
[ 3162.934790] do_execve_common.isra.19 (fs/exec.c:259 fs/exec.c:374 fs/exec.c:1496)
[ 3162.934790] do_execve (fs/exec.c:1568)
[ 3162.934790] run_init_process (init/main.c:818)
[ 3162.934790] kernel_init (init/main.c:866)
[ 3162.934790] ret_from_fork (arch/x86/kernel/entry_64.S:555)
[ 3162.934790]     SOFTIRQ-ON-R at:
[ 3162.934790] __lock_acquire (kernel/locking/lockdep.c:2804 kernel/locking/lockdep.c:3138)
[ 3162.934790] lock_acquire (arch/x86/include/asm/current.h:14 kernel/locking/lockdep.c:3602)
[ 3162.934790] might_fault (mm/memory.c:4214)
[ 3162.934790] __clear_user (arch/x86/lib/usercopy_64.c:18 arch/x86/lib/usercopy_64.c:21)
[ 3162.934790] clear_user (arch/x86/lib/usercopy_64.c:54)
[ 3162.934790] padzero (fs/binfmt_elf.c:122)
[ 3162.934790] load_elf_binary (fs/binfmt_elf.c:894 (discriminator 1))
[ 3162.934790] search_binary_handler (fs/exec.c:1392)
[ 3162.934790] do_execve_common.isra.19 (fs/exec.c:1429 fs/exec.c:1525)
[ 3162.934790] do_execve (fs/exec.c:1568)
[ 3162.934790] run_init_process (init/main.c:818)
[ 3162.934790] kernel_init (init/main.c:866)
[ 3162.934790] ret_from_fork (arch/x86/kernel/entry_64.S:555)
[ 3162.934790]     RECLAIM_FS-ON-W at:
[ 3162.934790] mark_held_locks (kernel/locking/lockdep.c:2523)
[ 3162.934790] lockdep_trace_alloc (kernel/locking/lockdep.c:2745 kernel/locking/lockdep.c:2760)
[ 3162.934790] __alloc_pages_nodemask (mm/page_alloc.c:2721)
[ 3162.934790] alloc_pages_current (mm/mempolicy.c:2131)
[ 3162.934790] pte_alloc_one (arch/x86/mm/pgtable.c:28)
[ 3162.934790] __pte_alloc (mm/memory.c:557)
[ 3162.934790] move_page_tables (mm/mremap.c:209 (discriminator 1))
[ 3162.934790] shift_arg_pages (fs/exec.c:607)
[ 3162.934790] setup_arg_pages (fs/exec.c:715)
[ 3162.934790] load_elf_binary (fs/binfmt_elf.c:745)
[ 3162.934790] search_binary_handler (fs/exec.c:1392)
[ 3162.934790] do_execve_common.isra.19 (fs/exec.c:1429 fs/exec.c:1525)
[ 3162.934790] do_execve (fs/exec.c:1568)
[ 3162.934790] run_init_process (init/main.c:818)
[ 3162.934790] kernel_init (init/main.c:866)
[ 3162.934790] ret_from_fork (arch/x86/kernel/entry_64.S:555)
[ 3162.934790]     RECLAIM_FS-ON-R at:
[ 3162.934790] mark_held_locks (kernel/locking/lockdep.c:2523)
[ 3162.934790] lockdep_trace_alloc (kernel/locking/lockdep.c:2745 kernel/locking/lockdep.c:2760)
[ 3162.934790] __alloc_pages_nodemask (mm/page_alloc.c:2721)
[ 3162.934790] alloc_pages_current (mm/mempolicy.c:2131)
[ 3162.934790] __get_free_pages (mm/page_alloc.c:2803)
[ 3162.934790] get_zeroed_page (mm/page_alloc.c:2812)
[ 3162.934790] __pud_alloc (mm/memory.c:3844)
[ 3162.934790] __handle_mm_fault (include/linux/mm.h:1368 mm/memory.c:3728)
[ 3162.934790] handle_mm_fault (mm/memory.c:3819)
[ 3162.934790] __do_page_fault (arch/x86/mm/fault.c:1220)
[ 3162.934790] do_page_fault (arch/x86/mm/fault.c:1272 include/linux/jump_label.h:105 include/linux/context_tracking_state.h:27 include/linux/context_tracking.h:45 arch/x86/mm/fault.c:1273)
[ 3162.934790] do_async_page_fault (arch/x86/kernel/kvm.c:263)
[ 3162.934790] async_page_fault (arch/x86/kernel/entry_64.S:1496)
[ 3162.934790] clear_user (arch/x86/lib/usercopy_64.c:54)
[ 3162.934790] padzero (fs/binfmt_elf.c:122)
[ 3162.934790] load_elf_binary (fs/binfmt_elf.c:894 (discriminator 1))
[ 3162.934790] search_binary_handler (fs/exec.c:1392)
[ 3162.934790] do_execve_common.isra.19 (fs/exec.c:1429 fs/exec.c:1525)
[ 3162.934790] do_execve (fs/exec.c:1568)
[ 3162.934790] run_init_process (init/main.c:818)
[ 3162.934790] kernel_init (init/main.c:866)
[ 3162.934790] ret_from_fork (arch/x86/kernel/entry_64.S:555)
[ 3162.934790]     INITIAL USE at:
[ 3162.934790] __lock_acquire (kernel/locking/lockdep.c:3142)
[ 3162.934790] lock_acquire (arch/x86/include/asm/current.h:14 kernel/locking/lockdep.c:3602)
[ 3162.934790] down_write (arch/x86/include/asm/rwsem.h:130 kernel/locking/rwsem.c:50)
[ 3162.934790] do_execve_common.isra.19 (fs/exec.c:259 fs/exec.c:374 fs/exec.c:1496)
[ 3162.934790] do_execve (fs/exec.c:1568)
[ 3162.934790] run_init_process (init/main.c:818)
[ 3162.934790] kernel_init (init/main.c:866)
[ 3162.934790] ret_from_fork (arch/x86/kernel/entry_64.S:555)
[ 3162.934790]   }
[ 3162.934790] ... key at: __key.50836 (??:?)
[ 3162.934790]   ... acquired at:
[ 3162.934790] lock_acquire (arch/x86/include/asm/current.h:14 kernel/locking/lockdep.c:3602)
[ 3162.934790] might_fault (mm/memory.c:4214)
[ 3162.934790] filldir (arch/x86/include/asm/uaccess.h:731 fs/readdir.c:176)
[ 3162.934790] xfs_dir2_sf_getdents (fs/xfs/xfs_dir2_readdir.c:131)
[ 3162.934790] xfs_readdir (fs/xfs/xfs_dir2_readdir.c:689)
[ 3162.934790] xfs_file_readdir (fs/xfs/xfs_file.c:977)
[ 3162.934790] iterate_dir (fs/readdir.c:42)
[ 3162.934790] SyS_getdents (fs/readdir.c:214 fs/readdir.c:193)
[ 3162.934790] tracesys (arch/x86/kernel/entry_64.S:749)
[ 3162.934790]
[ 3162.934790] -> (&xfs_dir_ilock_class){++++..} ops: 6 {
[ 3162.934790]    HARDIRQ-ON-W at:
[ 3162.934790] __lock_acquire (kernel/locking/lockdep.c:2800 kernel/locking/lockdep.c:3138)
[ 3162.934790] lock_acquire (arch/x86/include/asm/current.h:14 kernel/locking/lockdep.c:3602)
[ 3162.934790] down_write_nested (arch/x86/include/asm/rwsem.h:130 kernel/locking/rwsem.c:143)
[ 3162.934790] xfs_ilock (fs/xfs/mrlock.h:50 fs/xfs/xfs_inode.c:165)
[ 3162.934790] xfs_vn_update_time (fs/xfs/xfs_iops.c:944)
[ 3162.934790] update_time (fs/inode.c:1502)
[ 3162.934790] touch_atime (fs/inode.c:1565)
[ 3162.934790] iterate_dir (include/linux/fs.h:1815 fs/readdir.c:43)
[ 3162.934790] SyS_getdents (fs/readdir.c:214 fs/readdir.c:193)
[ 3162.934790] tracesys (arch/x86/kernel/entry_64.S:749)
[ 3162.934790]    HARDIRQ-ON-R at:
[ 3162.934790] __lock_acquire (kernel/locking/lockdep.c:2792 kernel/locking/lockdep.c:3138)
[ 3162.934790] lock_acquire (arch/x86/include/asm/current.h:14 kernel/locking/lockdep.c:3602)
[ 3162.934790] down_read_nested (arch/x86/include/asm/rwsem.h:83 kernel/locking/rwsem.c:114)
[ 3162.934790] xfs_ilock (fs/xfs/mrlock.h:43 fs/xfs/xfs_inode.c:167)
[ 3162.934790] xfs_ilock_data_map_shared (fs/xfs/xfs_inode.c:106)
[ 3162.934790] xfs_lookup (fs/xfs/xfs_inode.c:589)
[ 3162.934790] xfs_vn_lookup (fs/xfs/xfs_iops.c:230)
[ 3162.934790] lookup_real (fs/namei.c:1325)
[ 3162.934790] __lookup_hash (fs/namei.c:1343)
[ 3162.934790] lookup_slow (fs/namei.c:1454)
[ 3162.934790] path_lookupat (fs/namei.c:1534 fs/namei.c:1904 fs/namei.c:1938)
[ 3162.934790] filename_lookup (fs/namei.c:1978)
[ 3162.934790] user_path_at_empty (fs/namei.c:2126)
[ 3162.934790] user_path_at (fs/namei.c:2137)
[ 3162.934790] vfs_fstatat (fs/stat.c:107)
[ 3162.934790] vfs_stat (fs/stat.c:124)
[ 3162.934790] SYSC_newstat (fs/stat.c:272)
[ 3162.934790] SyS_newstat (fs/stat.c:267)
[ 3162.934790] tracesys (arch/x86/kernel/entry_64.S:749)
[ 3162.934790]    SOFTIRQ-ON-W at:
[ 3162.934790] __lock_acquire (kernel/locking/lockdep.c:2804 kernel/locking/lockdep.c:3138)
[ 3162.934790] lock_acquire (arch/x86/include/asm/current.h:14 kernel/locking/lockdep.c:3602)
[ 3162.934790] down_write_nested (arch/x86/include/asm/rwsem.h:130 kernel/locking/rwsem.c:143)
[ 3162.934790] xfs_ilock (fs/xfs/mrlock.h:50 fs/xfs/xfs_inode.c:165)
[ 3162.934790] xfs_vn_update_time (fs/xfs/xfs_iops.c:944)
[ 3162.934790] update_time (fs/inode.c:1502)
[ 3162.934790] touch_atime (fs/inode.c:1565)
[ 3162.934790] iterate_dir (include/linux/fs.h:1815 fs/readdir.c:43)
[ 3162.934790] SyS_getdents (fs/readdir.c:214 fs/readdir.c:193)
[ 3162.934790] tracesys (arch/x86/kernel/entry_64.S:749)
[ 3162.934790]    SOFTIRQ-ON-R at:
[ 3162.934790] __lock_acquire (kernel/locking/lockdep.c:2804 kernel/locking/lockdep.c:3138)
[ 3162.934790] lock_acquire (arch/x86/include/asm/current.h:14 kernel/locking/lockdep.c:3602)
[ 3162.934790] down_read_nested (arch/x86/include/asm/rwsem.h:83 kernel/locking/rwsem.c:114)
[ 3162.934790] xfs_ilock (fs/xfs/mrlock.h:43 fs/xfs/xfs_inode.c:167)
[ 3162.934790] xfs_ilock_data_map_shared (fs/xfs/xfs_inode.c:106)
[ 3162.934790] xfs_lookup (fs/xfs/xfs_inode.c:589)
[ 3162.934790] xfs_vn_lookup (fs/xfs/xfs_iops.c:230)
[ 3162.934790] lookup_real (fs/namei.c:1325)
[ 3162.934790] __lookup_hash (fs/namei.c:1343)
[ 3162.934790] lookup_slow (fs/namei.c:1454)
[ 3162.934790] path_lookupat (fs/namei.c:1534 fs/namei.c:1904 fs/namei.c:1938)
[ 3162.934790] filename_lookup (fs/namei.c:1978)
[ 3162.934790] user_path_at_empty (fs/namei.c:2126)
[ 3162.934790] user_path_at (fs/namei.c:2137)
[ 3162.934790] vfs_fstatat (fs/stat.c:107)
[ 3162.934790] vfs_stat (fs/stat.c:124)
[ 3162.934790] SYSC_newstat (fs/stat.c:272)
[ 3162.934790] SyS_newstat (fs/stat.c:267)
[ 3162.934790] tracesys (arch/x86/kernel/entry_64.S:749)
[ 3162.934790]    INITIAL USE at:
[ 3162.934790] __lock_acquire (kernel/locking/lockdep.c:3142)
[ 3162.934790] lock_acquire (arch/x86/include/asm/current.h:14 kernel/locking/lockdep.c:3602)
[ 3162.934790] down_read_nested (arch/x86/include/asm/rwsem.h:83 kernel/locking/rwsem.c:114)
[ 3162.934790] xfs_ilock (fs/xfs/mrlock.h:43 fs/xfs/xfs_inode.c:167)
[ 3162.934790] xfs_ilock_data_map_shared (fs/xfs/xfs_inode.c:106)
[ 3162.934790] xfs_lookup (fs/xfs/xfs_inode.c:589)
[ 3162.934790] xfs_vn_lookup (fs/xfs/xfs_iops.c:230)
[ 3162.934790] lookup_real (fs/namei.c:1325)
[ 3162.934790] __lookup_hash (fs/namei.c:1343)
[ 3162.934790] lookup_slow (fs/namei.c:1454)
[ 3162.934790] path_lookupat (fs/namei.c:1534 fs/namei.c:1904 fs/namei.c:1938)
[ 3162.934790] filename_lookup (fs/namei.c:1978)
[ 3162.934790] user_path_at_empty (fs/namei.c:2126)
[ 3162.934790] user_path_at (fs/namei.c:2137)
[ 3162.934790] vfs_fstatat (fs/stat.c:107)
[ 3162.934790] vfs_stat (fs/stat.c:124)
[ 3162.934790] SYSC_newstat (fs/stat.c:272)
[ 3162.934790] SyS_newstat (fs/stat.c:267)
[ 3162.934790] tracesys (arch/x86/kernel/entry_64.S:749)
[ 3162.934790]  }
[ 3162.934790] ... key at: xfs_dir_ilock_class (??:?)
[ 3162.934790]  ... acquired at:
[ 3162.934790] check_irq_usage (kernel/locking/lockdep.c:1638)
[ 3162.934790] __lock_acquire (kernel/locking/lockdep_states.h:9 kernel/locking/lockdep.c:1844 kernel/locking/lockdep.c:1945 kernel/locking/lockdep.c:2131 kernel/locking/lockdep.c:3182)
[ 3162.934790] lock_acquire (arch/x86/include/asm/current.h:14 kernel/locking/lockdep.c:3602)
[ 3162.934790] down_write_nested (arch/x86/include/asm/rwsem.h:130 kernel/locking/rwsem.c:143)
[ 3162.934790] xfs_ilock (fs/xfs/mrlock.h:50 fs/xfs/xfs_inode.c:165)
[ 3162.934790] xfs_vn_update_time (fs/xfs/xfs_iops.c:944)
[ 3162.934790] update_time (fs/inode.c:1502)
[ 3162.934790] touch_atime (fs/inode.c:1565)
[ 3162.934790] iterate_dir (include/linux/fs.h:1815 fs/readdir.c:43)
[ 3162.934790] SyS_getdents (fs/readdir.c:214 fs/readdir.c:193)
[ 3162.934790] tracesys (arch/x86/kernel/entry_64.S:749)
[ 3162.934790]
[ 3162.934790]
[ 3162.934790] stack backtrace:
[ 3162.934790] CPU: 0 PID: 17183 Comm: trinity-main Not tainted 3.14.0-next-20140408-sasha-00023-g06962b5 #384
[ 3162.934790]  ffffffff93a57840 ffff8804c55b5ad8 ffffffff9051ac81 0000000000000000
[ 3162.934790]  ffff8804c6b10d38 ffff8804c55b5be8 ffffffff8d1c0eb5 0000000000000000
[ 3162.934790]  ffff880400000001 0000000000000001 ffff8804c6b10000 ffffffff916c5d62
[ 3162.934790] Call Trace:
[ 3162.934790] dump_stack (lib/dump_stack.c:52)
[ 3162.934790] check_usage (kernel/locking/lockdep.c:1549 kernel/locking/lockdep.c:1580)
[ 3162.934790] ? save_stack_trace (arch/x86/kernel/stacktrace.c:64)
[ 3162.934790] ? check_usage_forwards (kernel/locking/lockdep.c:2371)
[ 3162.934790] check_irq_usage (kernel/locking/lockdep.c:1638)
[ 3162.934790] __lock_acquire (kernel/locking/lockdep_states.h:9 kernel/locking/lockdep.c:1844 kernel/locking/lockdep.c:1945 kernel/locking/lockdep.c:2131 kernel/locking/lockdep.c:3182)
[ 3162.934790] ? get_parent_ip (kernel/sched/core.c:2471)
[ 3162.934790] ? kmem_cache_alloc (include/linux/kmemleak.h:43 mm/slub.c:975 mm/slub.c:2468 mm/slub.c:2476 mm/slub.c:2481)
[ 3162.934790] ? get_parent_ip (kernel/sched/core.c:2471)
[ 3162.934790] ? preempt_count_sub (kernel/sched/core.c:2526)
[ 3162.934790] lock_acquire (arch/x86/include/asm/current.h:14 kernel/locking/lockdep.c:3602)
[ 3162.934790] ? xfs_ilock (fs/xfs/mrlock.h:50 fs/xfs/xfs_inode.c:165)
[ 3162.934790] ? xfs_vn_update_time (fs/xfs/xfs_iops.c:944)
[ 3162.934790] down_write_nested (arch/x86/include/asm/rwsem.h:130 kernel/locking/rwsem.c:143)
[ 3162.934790] ? xfs_ilock (fs/xfs/mrlock.h:50 fs/xfs/xfs_inode.c:165)
[ 3162.934790] ? xfs_trans_reserve (fs/xfs/xfs_trans.c:221)
[ 3162.934790] xfs_ilock (fs/xfs/mrlock.h:50 fs/xfs/xfs_inode.c:165)
[ 3162.934790] xfs_vn_update_time (fs/xfs/xfs_iops.c:944)
[ 3162.934790] update_time (fs/inode.c:1502)
[ 3162.934790] ? __mnt_want_write (arch/x86/include/asm/preempt.h:98 fs/namespace.c:358)
[ 3162.934790] touch_atime (fs/inode.c:1565)
[ 3162.934790] iterate_dir (include/linux/fs.h:1815 fs/readdir.c:43)
[ 3162.934790] SyS_getdents (fs/readdir.c:214 fs/readdir.c:193)
[ 3162.934790] ? iterate_dir (fs/readdir.c:151)
[ 3162.934790] tracesys (arch/x86/kernel/entry_64.S:749)


Thanks,
Sasha


* Re: xfs i_lock vs mmap_sem lockdep trace.
  2014-04-08 20:40         ` Sasha Levin
@ 2014-04-10 22:52           ` Dave Chinner
  -1 siblings, 0 replies; 16+ messages in thread
From: Dave Chinner @ 2014-04-10 22:52 UTC (permalink / raw)
  To: Sasha Levin; +Cc: Al Viro, Dave Jones, Linux Kernel, xfs

On Tue, Apr 08, 2014 at 04:40:32PM -0400, Sasha Levin wrote:
> On 03/30/2014 08:40 PM, Dave Chinner wrote:
> > On Mon, Mar 31, 2014 at 12:57:17AM +0100, Al Viro wrote:
> >> On Mon, Mar 31, 2014 at 10:43:35AM +1100, Dave Chinner wrote:
> >>> filldir on a directory inode vs page fault on regular file. Known
> >>> issue, definitely a false positive. We have to change locking
> >>> algorithms to avoid such deficiencies of lockdep (a case of "lockdep
> >>> considered harmful", perhaps?) so it's not something I'm about to
> >>> rush...
> >>
> >> Give i_lock on directories a separate class, as it's been done for i_mutex...
> > Already done that. Commit:
> > 
> > 93a8614 xfs: fix directory inode iolock lockdep false positive
> 
> Hi Dave,
> 
> The commit above introduces a new lockdep issue for me:

Right, it fixed one class of false positive, which has now uncovered
the next class of false positive. lockdep gives me the shits at times
because every time we make some obviously correct locking change
to the XFS inodes, we then spend the next 3-4 kernel releases chasing
down and annotating locks so that lockdep doesn't throw false
positives everywhere.

It's on my queue of things to fix but, quite frankly, lockdep false
positives are a low priority right now.
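
(As a rough illustration of the kind of annotation being discussed --
a minimal sketch of the general technique, not the actual XFS code
from 93a8614; the "example_*" names below are made up, only
init_rwsem() and lockdep_set_class() are real kernel APIs -- re-keying
a directory's lock into its own lockdep class looks roughly like this:

    #include <linux/fs.h>
    #include <linux/rwsem.h>
    #include <linux/lockdep.h>

    /* Separate keys so lockdep tracks dir and non-dir locks independently. */
    static struct lock_class_key example_dir_ilock_class;
    static struct lock_class_key example_nondir_ilock_class;

    struct example_inode {
            struct rw_semaphore i_ilock;    /* stand-in for ip->i_lock */
    };

    static void example_inode_lock_init(struct example_inode *ip, umode_t mode)
    {
            init_rwsem(&ip->i_ilock);

            /*
             * Directories and regular files take this lock in different
             * contexts (readdir vs page fault), so give each its own
             * class and stop lockdep conflating their dependency chains.
             */
            if (S_ISDIR(mode))
                    lockdep_set_class(&ip->i_ilock, &example_dir_ilock_class);
            else
                    lockdep_set_class(&ip->i_ilock, &example_nondir_ilock_class);
    }

The class key, not the lock instance, is what lockdep uses to build
its dependency graph, which is why each split like this has to be
annotated by hand.)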

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


Thread overview: 16+ messages
2014-03-29 22:31 xfs i_lock vs mmap_sem lockdep trace Dave Jones
2014-03-29 22:31 ` Dave Jones
2014-03-30 23:43 ` Dave Chinner
2014-03-30 23:43   ` Dave Chinner
2014-03-30 23:57   ` Al Viro
2014-03-30 23:57     ` Al Viro
2014-03-31  0:40     ` Dave Chinner
2014-03-31  0:40       ` Dave Chinner
2014-04-08 20:40       ` Sasha Levin
2014-04-08 20:40         ` Sasha Levin
2014-04-10 22:52         ` Dave Chinner
2014-04-10 22:52           ` Dave Chinner
2014-03-31  0:20   ` Dave Jones
2014-03-31  0:20     ` Dave Jones
2014-03-31  0:42     ` Dave Chinner
2014-03-31  0:42       ` Dave Chinner
