* XFS WARN_ON in xfs_vm_writepage
@ 2014-06-13  5:16 Dave Jones
  2014-06-13  6:26 ` Dave Chinner
  0 siblings, 1 reply; 6+ messages in thread
From: Dave Jones @ 2014-06-13  5:16 UTC (permalink / raw)
  To: xfs; +Cc: Linux Kernel

Just hit this on Linus' tree from earlier this afternoon..

WARNING: CPU: 3 PID: 19721 at fs/xfs/xfs_aops.c:971 xfs_vm_writepage+0x5ce/0x630 [xfs]()
CPU: 3 PID: 19721 Comm: trinity-c61 Not tainted 3.15.0+ #3
 0000000000000009 000000004f70ab82 ffff8801d5ebf578 ffffffff8373215c
 0000000000000000 ffff8801d5ebf5b0 ffffffff8306f7cd ffff88023dd543e0
 ffffea000254a3c0 ffff8801d5ebf820 ffffea000254a3e0 ffff8801d5ebf728
Call Trace:
 [<ffffffff8373215c>] dump_stack+0x4e/0x7a
 [<ffffffff8306f7cd>] warn_slowpath_common+0x7d/0xa0
 [<ffffffff8306f8fa>] warn_slowpath_null+0x1a/0x20
 [<ffffffffc023068e>] xfs_vm_writepage+0x5ce/0x630 [xfs]
 [<ffffffff8373f1ab>] ? preempt_count_sub+0xab/0x100
 [<ffffffff83347315>] ? __percpu_counter_add+0x85/0xc0
 [<ffffffff8316f759>] shrink_page_list+0x8f9/0xb90
 [<ffffffff83170123>] shrink_inactive_list+0x253/0x510
 [<ffffffff83170c93>] shrink_lruvec+0x563/0x6c0
 [<ffffffff83170e2b>] shrink_zone+0x3b/0x100
 [<ffffffff831710e1>] shrink_zones+0x1f1/0x3c0
 [<ffffffff83171414>] try_to_free_pages+0x164/0x380
 [<ffffffff83163e52>] __alloc_pages_nodemask+0x822/0xc90
 [<ffffffff83169eb2>] ? pagevec_lru_move_fn+0x122/0x140
 [<ffffffff831abeff>] alloc_pages_vma+0xaf/0x1c0
 [<ffffffff8318a931>] handle_mm_fault+0xa31/0xc50
 [<ffffffff831845c0>] ? follow_page_mask+0x1f0/0x320
 [<ffffffff8318491b>] __get_user_pages+0x22b/0x660
 [<ffffffff831b5093>] ? kmem_cache_alloc+0x183/0x210
 [<ffffffff8318ce7e>] __mlock_vma_pages_range+0x9e/0xd0
 [<ffffffff8318d6ba>] __mm_populate+0xca/0x180
 [<ffffffff83179033>] vm_mmap_pgoff+0xd3/0xe0
 [<ffffffff8318fbd6>] SyS_mmap_pgoff+0x116/0x2c0
 [<ffffffff83011ced>] ? syscall_trace_enter+0x14d/0x2a0
 [<ffffffff830084c2>] SyS_mmap+0x22/0x30
 [<ffffffff837436ef>] tracesys+0xdd/0xe2


 970         if (WARN_ON_ONCE((current->flags & (PF_MEMALLOC|PF_KSWAPD)) ==
 971                         PF_MEMALLOC))



* Re: XFS WARN_ON in xfs_vm_writepage
  2014-06-13  5:16 XFS WARN_ON in xfs_vm_writepage Dave Jones
@ 2014-06-13  6:26 ` Dave Chinner
  2014-06-13 14:19   ` Dave Jones
  0 siblings, 1 reply; 6+ messages in thread
From: Dave Chinner @ 2014-06-13  6:26 UTC (permalink / raw)
  To: Dave Jones, xfs, Linux Kernel; +Cc: linux-mm

[cc linux-mm]

On Fri, Jun 13, 2014 at 01:16:31AM -0400, Dave Jones wrote:
> Just hit this on Linus' tree from earlier this afternoon..
> 
> WARNING: CPU: 3 PID: 19721 at fs/xfs/xfs_aops.c:971 xfs_vm_writepage+0x5ce/0x630 [xfs]()
> CPU: 3 PID: 19721 Comm: trinity-c61 Not tainted 3.15.0+ #3
>  0000000000000009 000000004f70ab82 ffff8801d5ebf578 ffffffff8373215c
>  0000000000000000 ffff8801d5ebf5b0 ffffffff8306f7cd ffff88023dd543e0
>  ffffea000254a3c0 ffff8801d5ebf820 ffffea000254a3e0 ffff8801d5ebf728
> Call Trace:
>  [<ffffffff8373215c>] dump_stack+0x4e/0x7a
>  [<ffffffff8306f7cd>] warn_slowpath_common+0x7d/0xa0
>  [<ffffffff8306f8fa>] warn_slowpath_null+0x1a/0x20
>  [<ffffffffc023068e>] xfs_vm_writepage+0x5ce/0x630 [xfs]
>  [<ffffffff8373f1ab>] ? preempt_count_sub+0xab/0x100
>  [<ffffffff83347315>] ? __percpu_counter_add+0x85/0xc0
>  [<ffffffff8316f759>] shrink_page_list+0x8f9/0xb90
>  [<ffffffff83170123>] shrink_inactive_list+0x253/0x510
>  [<ffffffff83170c93>] shrink_lruvec+0x563/0x6c0
>  [<ffffffff83170e2b>] shrink_zone+0x3b/0x100
>  [<ffffffff831710e1>] shrink_zones+0x1f1/0x3c0
>  [<ffffffff83171414>] try_to_free_pages+0x164/0x380
>  [<ffffffff83163e52>] __alloc_pages_nodemask+0x822/0xc90
>  [<ffffffff83169eb2>] ? pagevec_lru_move_fn+0x122/0x140
>  [<ffffffff831abeff>] alloc_pages_vma+0xaf/0x1c0
>  [<ffffffff8318a931>] handle_mm_fault+0xa31/0xc50
>  [<ffffffff831845c0>] ? follow_page_mask+0x1f0/0x320
>  [<ffffffff8318491b>] __get_user_pages+0x22b/0x660
>  [<ffffffff831b5093>] ? kmem_cache_alloc+0x183/0x210
>  [<ffffffff8318ce7e>] __mlock_vma_pages_range+0x9e/0xd0
>  [<ffffffff8318d6ba>] __mm_populate+0xca/0x180
>  [<ffffffff83179033>] vm_mmap_pgoff+0xd3/0xe0
>  [<ffffffff8318fbd6>] SyS_mmap_pgoff+0x116/0x2c0
>  [<ffffffff83011ced>] ? syscall_trace_enter+0x14d/0x2a0
>  [<ffffffff830084c2>] SyS_mmap+0x22/0x30
>  [<ffffffff837436ef>] tracesys+0xdd/0xe2
> 
> 
>  970         if (WARN_ON_ONCE((current->flags & (PF_MEMALLOC|PF_KSWAPD)) ==
>  971                         PF_MEMALLOC))

What were you running at the time? The XFS warning is there to
indicate that memory reclaim is doing something it shouldn't (i.e.
dirty page writeback from direct reclaim), so this is one for the mm
folk to work out...
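
To spell out the flag logic that check relies on - this is an
illustrative helper, not the kernel's actual code, though PF_MEMALLOC
and PF_KSWAPD are the real task flags from include/linux/sched.h:

	/*
	 * PF_MEMALLOC is set on any task performing reclaim;
	 * PF_KSWAPD is set only on kswapd itself.  So this returns
	 * true exactly when ->writepage is entered from direct (or
	 * memcg) reclaim rather than from kswapd or normal writeback.
	 */
	static bool in_direct_reclaim(unsigned long pflags)
	{
		return (pflags & (PF_MEMALLOC | PF_KSWAPD)) == PF_MEMALLOC;
	}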

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: XFS WARN_ON in xfs_vm_writepage
  2014-06-13  6:26 ` Dave Chinner
@ 2014-06-13 14:19   ` Dave Jones
  2014-06-19  2:03     ` Dave Chinner
  0 siblings, 1 reply; 6+ messages in thread
From: Dave Jones @ 2014-06-13 14:19 UTC (permalink / raw)
  To: Dave Chinner; +Cc: xfs, Linux Kernel, linux-mm

On Fri, Jun 13, 2014 at 04:26:45PM +1000, Dave Chinner wrote:

> >  970         if (WARN_ON_ONCE((current->flags & (PF_MEMALLOC|PF_KSWAPD)) ==
> >  971                         PF_MEMALLOC))
>
> What were you running at the time? The XFS warning is there to
> indicate that memory reclaim is doing something it shouldn't (i.e.
> dirty page writeback from direct reclaim), so this is one for the mm
> folk to work out...

Trinity had driven the machine deeply into swap, and the oom killer was
kicking in pretty often. Then this happened.

	Dave



* Re: XFS WARN_ON in xfs_vm_writepage
  2014-06-13 14:19   ` Dave Jones
@ 2014-06-19  2:03     ` Dave Chinner
  2014-06-23 20:27       ` Dave Jones
  0 siblings, 1 reply; 6+ messages in thread
From: Dave Chinner @ 2014-06-19  2:03 UTC (permalink / raw)
  To: Dave Jones, xfs, Linux Kernel, linux-mm

On Fri, Jun 13, 2014 at 10:19:25AM -0400, Dave Jones wrote:
> On Fri, Jun 13, 2014 at 04:26:45PM +1000, Dave Chinner wrote:
> 
> > >  970         if (WARN_ON_ONCE((current->flags & (PF_MEMALLOC|PF_KSWAPD)) ==
> > >  971                         PF_MEMALLOC))
> >
> > What were you running at the time? The XFS warning is there to
> > indicate that memory reclaim is doing something it shouldn't (i.e.
> > dirty page writeback from direct reclaim), so this is one for the mm
> > folk to work out...
> 
> Trinity had driven the machine deeply into swap, and the oom killer was
> kicking in pretty often. Then this happened.

Yup, sounds like a problem somewhere in mm/vmscan.c....

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: XFS WARN_ON in xfs_vm_writepage
  2014-06-19  2:03     ` Dave Chinner
@ 2014-06-23 20:27       ` Dave Jones
  2014-06-24  1:02         ` Dave Chinner
  0 siblings, 1 reply; 6+ messages in thread
From: Dave Jones @ 2014-06-23 20:27 UTC (permalink / raw)
  To: Dave Chinner; +Cc: xfs, Linux Kernel, linux-mm

On Thu, Jun 19, 2014 at 12:03:40PM +1000, Dave Chinner wrote:
 > On Fri, Jun 13, 2014 at 10:19:25AM -0400, Dave Jones wrote:
 > > On Fri, Jun 13, 2014 at 04:26:45PM +1000, Dave Chinner wrote:
 > > 
 > > > >  970         if (WARN_ON_ONCE((current->flags & (PF_MEMALLOC|PF_KSWAPD)) ==
 > > > >  971                         PF_MEMALLOC))
 > > >
 > > > What were you running at the time? The XFS warning is there to
 > > > indicate that memory reclaim is doing something it shouldn't (i.e.
 > > > dirty page writeback from direct reclaim), so this is one for the mm
 > > > folk to work out...
 > > 
 > > Trinity had driven the machine deeply into swap, and the oom killer was
 > > kicking in pretty often. Then this happened.
 > 
 > Yup, sounds like a problem somewhere in mm/vmscan.c....
 
I'm now hitting this fairly often, and no-one seems to have offered up
any suggestions yet, so I'm going to flail and guess randomly until someone
has a better idea what could be wrong.

The commentary around that WARN, for the benefit of linux-mm readers..

 960         /*
 961          * Refuse to write the page out if we are called from reclaim context.
 962          *
 963          * This avoids stack overflows when called from deeply used stacks in
 964          * random callers for direct reclaim or memcg reclaim.  We explicitly
 965          * allow reclaim from kswapd as the stack usage there is relatively low.
 966          *
 967          * This should never happen except in the case of a VM regression so
 968          * warn about it.
 969          */
 970         if (WARN_ON_ONCE((current->flags & (PF_MEMALLOC|PF_KSWAPD)) ==
 971                         PF_MEMALLOC))
 972                 goto redirty;
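
From memory, the redirty target a few lines further down just pushes
the page back for a later, safer writeback pass (paraphrased from the
same function):

	redirty:
		redirty_page_for_writepage(wbc, page);
		unlock_page(page);
		return 0;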


Looking at this trace..

xfs_vm_writepage+0x5ce/0x630 [xfs]
? preempt_count_sub+0xab/0x100
? __percpu_counter_add+0x85/0xc0
shrink_page_list+0x8f9/0xb90
shrink_inactive_list+0x253/0x510
shrink_lruvec+0x563/0x6c0
shrink_zone+0x3b/0x100
shrink_zones+0x1f1/0x3c0
try_to_free_pages+0x164/0x380
__alloc_pages_nodemask+0x822/0xc90
alloc_pages_vma+0xaf/0x1c0
read_swap_cache_async+0x123/0x220
? final_putname+0x22/0x50
swapin_readahead+0x149/0x1d0
? find_get_entry+0xd5/0x130
? pagecache_get_page+0x30/0x210
? debug_smp_processor_id+0x17/0x20
handle_mm_fault+0x9d5/0xc50
__do_page_fault+0x1d2/0x640
? __acct_update_integrals+0x8b/0x120
? preempt_count_sub+0xab/0x100
do_page_fault+0x1e/0x70
page_fault+0x22/0x30

The reclaim here looks to have been triggered from the swapin readahead code.
Should something in that path be setting PF_KSWAPD in the gfp mask?
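
For reference, PF_MEMALLOC and PF_KSWAPD are task flags rather than gfp
flags; direct reclaim sets the former around the reclaim call, roughly
like this (paraphrased from the mm/page_alloc.c of that era, with the
bookkeeping elided):

	/* __alloc_pages_direct_reclaim() path, simplified */
	current->flags |= PF_MEMALLOC;
	progress = try_to_free_pages(zonelist, order, gfp_mask, nodemask);
	current->flags &= ~PF_MEMALLOC;

So any allocation that falls into direct reclaim - including the swapin
readahead one in the trace above - reaches shrink_page_list() with
PF_MEMALLOC set and PF_KSWAPD clear, which is exactly the combination
the XFS check fires on.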

	Dave



* Re: XFS WARN_ON in xfs_vm_writepage
  2014-06-23 20:27       ` Dave Jones
@ 2014-06-24  1:02         ` Dave Chinner
  0 siblings, 0 replies; 6+ messages in thread
From: Dave Chinner @ 2014-06-24  1:02 UTC (permalink / raw)
  To: Dave Jones, xfs, Linux Kernel, linux-mm

On Mon, Jun 23, 2014 at 04:27:14PM -0400, Dave Jones wrote:
> On Thu, Jun 19, 2014 at 12:03:40PM +1000, Dave Chinner wrote:
>  > On Fri, Jun 13, 2014 at 10:19:25AM -0400, Dave Jones wrote:
>  > > On Fri, Jun 13, 2014 at 04:26:45PM +1000, Dave Chinner wrote:
>  > > 
>  > > > >  970         if (WARN_ON_ONCE((current->flags & (PF_MEMALLOC|PF_KSWAPD)) ==
>  > > > >  971                         PF_MEMALLOC))
>  > > >
>  > > > What were you running at the time? The XFS warning is there to
>  > > > indicate that memory reclaim is doing something it shouldn't (i.e.
>  > > > dirty page writeback from direct reclaim), so this is one for the mm
>  > > > folk to work out...
>  > > 
>  > > Trinity had driven the machine deeply into swap, and the oom killer was
>  > > kicking in pretty often. Then this happened.
>  > 
>  > Yup, sounds like a problem somewhere in mm/vmscan.c....
>  
> I'm now hitting this fairly often, and no-one seems to have offered up
> any suggestions yet, so I'm going to flail and guess randomly until someone
> has a better idea what could be wrong.

You are not alone - I haven't been able to get anyone from the MM
side of things to comment on any of the bad behaviours we've had
reported recently.

> The commentary around that WARN, for the benefit of linux-mm readers..
> 
>  960         /*
>  961          * Refuse to write the page out if we are called from reclaim context.
>  962          *
>  963          * This avoids stack overflows when called from deeply used stacks in
>  964          * random callers for direct reclaim or memcg reclaim.  We explicitly
>  965          * allow reclaim from kswapd as the stack usage there is relatively low.
>  966          *
>  967          * This should never happen except in the case of a VM regression so
>  968          * warn about it.
>  969          */
>  970         if (WARN_ON_ONCE((current->flags & (PF_MEMALLOC|PF_KSWAPD)) ==
>  971                         PF_MEMALLOC))
>  972                 goto redirty;
> 
> 
> Looking at this trace..
> 
> xfs_vm_writepage+0x5ce/0x630 [xfs]
> ? preempt_count_sub+0xab/0x100
> ? __percpu_counter_add+0x85/0xc0
> shrink_page_list+0x8f9/0xb90
> shrink_inactive_list+0x253/0x510
> shrink_lruvec+0x563/0x6c0
> shrink_zone+0x3b/0x100
> shrink_zones+0x1f1/0x3c0
> try_to_free_pages+0x164/0x380
> __alloc_pages_nodemask+0x822/0xc90
> alloc_pages_vma+0xaf/0x1c0
> read_swap_cache_async+0x123/0x220
> ? final_putname+0x22/0x50
> swapin_readahead+0x149/0x1d0
> ? find_get_entry+0xd5/0x130
> ? pagecache_get_page+0x30/0x210
> ? debug_smp_processor_id+0x17/0x20
> handle_mm_fault+0x9d5/0xc50
> __do_page_fault+0x1d2/0x640
> ? __acct_update_integrals+0x8b/0x120
> ? preempt_count_sub+0xab/0x100
> do_page_fault+0x1e/0x70
> page_fault+0x22/0x30
> 
> The reclaim here looks to have been triggered from the swapin readahead code.
> Should something in that path be setting PF_KSWAPD in the gfp mask?

Definitely not. It's not kswapd that is doing the memory allocation
and we most certainly do not want direct reclaim to get a free pass
through reclaim congestion and backoff algorithms.
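
FWIW, PF_KSWAPD is a task flag that only the kswapd thread ever sets,
on itself, when it starts up - roughly (paraphrased from the
mm/vmscan.c of that era):

	static int kswapd(void *p)
	{
		struct task_struct *tsk = current;

		tsk->flags |= PF_MEMALLOC | PF_SWAPWRITE | PF_KSWAPD;
		/* ... main balance_pgdat() loop elided ... */
		return 0;
	}

It never appears in a gfp mask, so there is nothing the readahead path
could usefully set there.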

This could be another symptom of the other problems we've been
seeing, which involve direct reclaim throttling way too hard (via the
too_many_isolated() loops) and getting stuck. This is a case of
direct reclaim finding dirty pages on the LRU, which should have
been handled by writeback threads or kswapd before direct reclaim
could find them. IOWs, direct reclaim is doing work when it probably
should have been throttled.
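
The throttle in question looks roughly like this (paraphrased from
shrink_inactive_list() in the mm/vmscan.c of that era):

	/*
	 * Direct global reclaim only: too_many_isolated() returns
	 * false for kswapd and for memcg reclaim.
	 */
	while (unlikely(too_many_isolated(zone, file, sc))) {
		congestion_wait(BLK_RW_ASYNC, HZ/10);

		/* let a fatally signalled task exit instead of spinning */
		if (fatal_signal_pending(current))
			return SWAP_CLUSTER_MAX;
	}

If the isolated-page counts stay high, direct reclaimers can sit in
that loop for a long time, which would match the "getting stuck"
behaviour we've seen.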

As the comment in the XFS code says: "This should never happen
except in the case of a VM regression..."

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com
