* Re: [Bug 192981] New: page allocation stalls
       [not found] <bug-192981-27@https.bugzilla.kernel.org/>
@ 2017-01-23 21:51 ` Andrew Morton
  2017-01-30 15:11   ` Alexander Polakov
                     ` (2 more replies)
  0 siblings, 3 replies; 27+ messages in thread
From: Andrew Morton @ 2017-01-23 21:51 UTC (permalink / raw)
  To: linux-mm; +Cc: bugzilla-daemon, apolyakov



(switched to email.  Please respond via emailed reply-to-all, not via the
bugzilla web interface).

A 2100 second page allocation stall!


On Fri, 20 Jan 2017 15:14:13 +0000 bugzilla-daemon@bugzilla.kernel.org wrote:

> https://bugzilla.kernel.org/show_bug.cgi?id=192981
> 
>             Bug ID: 192981
>            Summary: page allocation stalls
>            Product: Memory Management
>            Version: 2.5
>     Kernel Version: 4.9
>           Hardware: x86-64
>                 OS: Linux
>               Tree: Mainline
>             Status: NEW
>           Severity: normal
>           Priority: P1
>          Component: Page Allocator
>           Assignee: akpm@linux-foundation.org
>           Reporter: apolyakov@beget.ru
>         Regression: No
> 
> Created attachment 252621
>   --> https://bugzilla.kernel.org/attachment.cgi?id=252621&action=edit
> netconsole log with page allocation stalls
> 
> We have been experiencing page allocation stalls regularly on our machines used
> as backup servers (many disks, mostly running rsync and rm).
> 
> A notable one (2102516ms):
> 
> 2017-01-17T11:08:33.754562+03:00 storage8 [335170.452601] rsync: 
> 2017-01-17T11:08:33.754574+03:00 page allocation stalls for 2102516ms, order:0
> 2017-01-17T11:08:33.754825+03:00 storage8 ,
> mode:0x26040d0(GFP_TEMPORARY|__GFP_COMP|__GFP_NOTRACK)
> 2017-01-17T11:08:33.755094+03:00 storage8 [335170.452896] CPU: 2 PID: 20383
> Comm: rsync Tainted: G           O    4.9.0-0-beget-vanilla #1
> 2017-01-17T11:08:33.755337+03:00 storage8 [335170.453156] Hardware name:
> Supermicro X8DTL/X8DTL, BIOS 2.1b       11/16/2012
> 2017-01-17T11:08:33.755379+03:00 storage8 [335170.453414]  0000000000000000
> 2017-01-17T11:08:33.755379+03:00 storage8  ffffffff92441e0b
> 2017-01-17T11:08:33.755393+03:00 storage8  ffff8c4b0f30f430
> 2017-01-17T11:08:33.755484+03:00 storage8  0000000000000001
> 2017-01-17T11:08:33.755600+03:00 storage8  
> 2017-01-17T11:08:33.755603+03:00 storage8 [335170.453681]  ffffffff92c3d1f8
> 2017-01-17T11:08:33.755643+03:00 storage8  ffffffff92189594
> 2017-01-17T11:08:33.755648+03:00 storage8  ffff8c4b0f315088
> 2017-01-17T11:08:33.755754+03:00 storage8  026040d0026040d0
> 2017-01-17T11:08:33.755874+03:00 storage8  
> 2017-01-17T11:08:33.755874+03:00 storage8 [335170.453951]  0000000000000020
> 2017-01-17T11:08:33.755880+03:00 storage8  0000000000000010
> 2017-01-17T11:08:33.755913+03:00 storage8  ffffa449d15cb7f8
> 2017-01-17T11:08:33.756024+03:00 storage8  ffffa449d15cb7a8
> 2017-01-17T11:08:33.756163+03:00 storage8  
> 2017-01-17T11:08:33.756278+03:00 storage8 [335170.454216] Call Trace:
> 2017-01-17T11:08:33.756401+03:00 storage8 [335170.454347]  [<ffffffff92441e0b>]
> ? dump_stack+0x47/0x5c
> 2017-01-17T11:08:33.756537+03:00 storage8 [335170.454479]  [<ffffffff92189594>]
> ? warn_alloc+0x134/0x150
> 2017-01-17T11:08:33.756677+03:00 storage8 [335170.454612]  [<ffffffff92189f2d>]
> ? __alloc_pages_nodemask+0x90d/0xdb0
> 2017-01-17T11:08:33.756808+03:00 storage8 [335170.454753]  [<ffffffff923a5da4>]
> ? _xfs_trans_bjoin+0xe4/0x110
> 2017-01-17T11:08:33.756955+03:00 storage8 [335170.454892]  [<ffffffff921d76fa>]
> ? alloc_pages_current+0x9a/0x120
> 2017-01-17T11:08:33.757096+03:00 storage8 [335170.455027]  [<ffffffff921de440>]
> ? new_slab+0x550/0x5f0
> 2017-01-17T11:08:33.757218+03:00 storage8 [335170.455162]  [<ffffffff921df2eb>]
> ? ___slab_alloc.isra.59+0x3db/0x500
> 2017-01-17T11:08:33.757346+03:00 storage8 [335170.455297]  [<ffffffff923a683a>]
> ? xfs_trans_brelse+0x1aa/0x270
> 2017-01-17T11:08:33.757489+03:00 storage8 [335170.455427]  [<ffffffff921f4612>]
> ? memcg_kmem_get_cache+0x72/0x170
> 2017-01-17T11:08:33.757632+03:00 storage8 [335170.455567]  [<ffffffff92447837>]
> ? __radix_tree_create+0xa7/0x320
> 2017-01-17T11:08:33.757762+03:00 storage8 [335170.455706]  [<ffffffff92200be5>]
> ? __slab_alloc.isra.60+0xe/0x12
> 2017-01-17T11:08:33.757897+03:00 storage8 [335170.455842]  [<ffffffff921df5b0>]
> ? kmem_cache_alloc+0x1a0/0x1b0
> 2017-01-17T11:08:33.758034+03:00 storage8 [335170.455978]  [<ffffffff9221b746>]
> ? __d_alloc+0x26/0x1c0
> 2017-01-17T11:08:33.758160+03:00 storage8 [335170.456108]  [<ffffffff9221bb67>]
> ? d_alloc+0x17/0x80
> 2017-01-17T11:08:33.758290+03:00 storage8 [335170.456236]  [<ffffffff9221bdf1>]
> ? d_alloc_parallel+0x31/0x4b0
> 2017-01-17T11:08:33.758429+03:00 storage8 [335170.456370]  [<ffffffff9220f836>]
> ? lookup_fast+0x56/0x2f0
> 2017-01-17T11:08:33.758579+03:00 storage8 [335170.456505]  [<ffffffff923804de>]
> ? xfs_iunlock+0x17e/0x210
> 2017-01-17T11:08:33.758694+03:00 storage8 [335170.456635]  [<ffffffff922237f2>]
> ? legitimize_mnt+0x12/0x40
> 2017-01-17T11:08:33.758818+03:00 storage8 [335170.456770]  [<ffffffff9220d8f0>]
> ? lookup_slow+0x80/0x170
> 2017-01-17T11:08:33.758955+03:00 storage8 [335170.456897]  [<ffffffff9220fd08>]
> ? walk_component+0x1a8/0x270
> 2017-01-17T11:08:33.759086+03:00 storage8 [335170.457024]  [<ffffffff92210398>]
> ? path_lookupat+0x48/0xf0
> 2017-01-17T11:08:33.759200+03:00 storage8 [335170.457153]  [<ffffffff92212e75>]
> ? filename_lookup+0xb5/0x190
> 2017-01-17T11:08:33.759336+03:00 storage8 [335170.457282]  [<ffffffff92474992>]
> ? strncpy_from_user+0x42/0x140
> 2017-01-17T11:08:33.759462+03:00 storage8 [335170.457411]  [<ffffffff9221305b>]
> ? getname_flags+0x7b/0x200
> 2017-01-17T11:08:33.759593+03:00 storage8 [335170.457538]  [<ffffffff92208b66>]
> ? vfs_fstatat+0x46/0x90
> 2017-01-17T11:08:33.759719+03:00 storage8 [335170.457666]  [<ffffffff9221aaf4>]
> ? dput+0x34/0x200
> 2017-01-17T11:08:33.759859+03:00 storage8 [335170.457793]  [<ffffffff92208d47>]
> ? SyS_newlstat+0x17/0x30
> 2017-01-17T11:08:33.759977+03:00 storage8 [335170.457923]  [<ffffffff9282b7a0>]
> ? entry_SYSCALL_64_fastpath+0x13/0x94
> 
> 
> # sysctl vm
> vm.admin_reserve_kbytes = 1048576
> vm.block_dump = 0
> error: permission denied on key 'vm.compact_memory'
> vm.compact_unevictable_allowed = 1
> vm.dirty_background_bytes = 0
> vm.dirty_background_ratio = 1
> vm.dirty_bytes = 0
> vm.dirty_expire_centisecs = 1000
> vm.dirty_ratio = 1
> vm.dirty_writeback_centisecs = 100
> vm.dirtytime_expire_seconds = 43200
> vm.drop_caches = 0
> vm.extfrag_threshold = 500
> vm.hugepages_treat_as_movable = 0
> vm.hugetlb_shm_group = 0
> vm.laptop_mode = 0
> vm.legacy_va_layout = 0
> vm.lowmem_reserve_ratio = 128   128     16
> vm.max_map_count = 65530
> vm.memory_failure_early_kill = 0
> vm.memory_failure_recovery = 1
> vm.min_free_kbytes = 4194304
> vm.min_slab_ratio = 1
> vm.min_unmapped_ratio = 1
> vm.mmap_min_addr = 65536
> vm.mmap_rnd_bits = 28
> vm.nr_hugepages = 0
> vm.nr_hugepages_mempolicy = 0
> vm.nr_overcommit_hugepages = 0
> vm.nr_pdflush_threads = 0
> vm.numa_zonelist_order = default
> vm.oom_dump_tasks = 1
> vm.oom_kill_allocating_task = 0
> vm.overcommit_kbytes = 0
> vm.overcommit_memory = 1
> vm.overcommit_ratio = 50
> vm.page-cluster = 3
> vm.panic_on_oom = 0
> vm.percpu_pagelist_fraction = 0
> vm.stat_interval = 1
> error: permission denied on key 'vm.stat_refresh'
> vm.swappiness = 0
> vm.user_reserve_kbytes = 131072
> vm.vfs_cache_pressure = 200
> vm.watermark_scale_factor = 100
> vm.zone_reclaim_mode = 7
> 
> # free -m
>              total       used       free     shared    buffers     cached
> Mem:         96646      46330      50315          0      25529       1228
> -/+ buffers/cache:      19572      77073
> Swap:            0          0          0
> 
> (Yes, we don't have any swap configured)
> 
> On this particular one we have 56 disks, using both XFS and EXT4. I attach full
> netconsole log for 18/01/2017. You can see there that we run drop_caches
> regularly as it helps to keep the machine alive.
> 
> Since then we've also tried mounting filesystems with sync flag to avoid
> writeback and it seems to have helped a lot (full day without stalls).
> 
> -- 
> You are receiving this mail because:
> You are the assignee for the bug.

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [Bug 192981] New: page allocation stalls
  2017-01-23 21:51 ` [Bug 192981] New: page allocation stalls Andrew Morton
@ 2017-01-30 15:11   ` Alexander Polakov
  2017-02-01 15:27   ` Michal Hocko
  2017-02-15 12:56     ` Alexander Polakov
  2 siblings, 0 replies; 27+ messages in thread
From: Alexander Polakov @ 2017-01-30 15:11 UTC (permalink / raw)
  To: Andrew Morton, linux-mm; +Cc: bugzilla-daemon

On 01/24/2017 12:51 AM, Andrew Morton wrote:

Tetsuo Handa suggested applying the kmallocwd patch
(http://lkml.kernel.org/r/1478416501-10104-1-git-send-email-penguin-kernel@I-love.SAKURA.ne.jp)
and sending netconsole logs to this list.

I uploaded them here: http://aplkv.beget.tech/lkml/

>
>
> (switched to email.  Please respond via emailed reply-to-all, not via the
> bugzilla web interface).
>
> A 2100 second page allocation stall!
>
>
> On Fri, 20 Jan 2017 15:14:13 +0000 bugzilla-daemon@bugzilla.kernel.org wrote:
>
>> https://bugzilla.kernel.org/show_bug.cgi?id=192981
>>
>>             Bug ID: 192981
>>            Summary: page allocation stalls
>>            Product: Memory Management
>>            Version: 2.5
>>     Kernel Version: 4.9
>>           Hardware: x86-64
>>                 OS: Linux
>>               Tree: Mainline
>>             Status: NEW
>>           Severity: normal
>>           Priority: P1
>>          Component: Page Allocator
>>           Assignee: akpm@linux-foundation.org
>>           Reporter: apolyakov@beget.ru
>>         Regression: No
>>
>> Created attachment 252621
>>   --> https://bugzilla.kernel.org/attachment.cgi?id=252621&action=edit
>> netconsole log with page allocation stalls
>>
>> We have been experiencing page allocation stalls regularly on our machines used
>> as backup servers (many disks, mostly running rsync and rm).
>>
>> A notable one (2102516ms):
>>
>> 2017-01-17T11:08:33.754562+03:00 storage8 [335170.452601] rsync:
>> 2017-01-17T11:08:33.754574+03:00 page allocation stalls for 2102516ms, order:0
>> 2017-01-17T11:08:33.754825+03:00 storage8 ,
>> mode:0x26040d0(GFP_TEMPORARY|__GFP_COMP|__GFP_NOTRACK)
>> 2017-01-17T11:08:33.755094+03:00 storage8 [335170.452896] CPU: 2 PID: 20383
>> Comm: rsync Tainted: G           O    4.9.0-0-beget-vanilla #1
>> 2017-01-17T11:08:33.755337+03:00 storage8 [335170.453156] Hardware name:
>> Supermicro X8DTL/X8DTL, BIOS 2.1b       11/16/2012
>> 2017-01-17T11:08:33.755379+03:00 storage8 [335170.453414]  0000000000000000
>> 2017-01-17T11:08:33.755379+03:00 storage8  ffffffff92441e0b
>> 2017-01-17T11:08:33.755393+03:00 storage8  ffff8c4b0f30f430
>> 2017-01-17T11:08:33.755484+03:00 storage8  0000000000000001
>> 2017-01-17T11:08:33.755600+03:00 storage8
>> 2017-01-17T11:08:33.755603+03:00 storage8 [335170.453681]  ffffffff92c3d1f8
>> 2017-01-17T11:08:33.755643+03:00 storage8  ffffffff92189594
>> 2017-01-17T11:08:33.755648+03:00 storage8  ffff8c4b0f315088
>> 2017-01-17T11:08:33.755754+03:00 storage8  026040d0026040d0
>> 2017-01-17T11:08:33.755874+03:00 storage8
>> 2017-01-17T11:08:33.755874+03:00 storage8 [335170.453951]  0000000000000020
>> 2017-01-17T11:08:33.755880+03:00 storage8  0000000000000010
>> 2017-01-17T11:08:33.755913+03:00 storage8  ffffa449d15cb7f8
>> 2017-01-17T11:08:33.756024+03:00 storage8  ffffa449d15cb7a8
>> 2017-01-17T11:08:33.756163+03:00 storage8
>> 2017-01-17T11:08:33.756278+03:00 storage8 [335170.454216] Call Trace:
>> 2017-01-17T11:08:33.756401+03:00 storage8 [335170.454347]  [<ffffffff92441e0b>]
>> ? dump_stack+0x47/0x5c
>> 2017-01-17T11:08:33.756537+03:00 storage8 [335170.454479]  [<ffffffff92189594>]
>> ? warn_alloc+0x134/0x150
>> 2017-01-17T11:08:33.756677+03:00 storage8 [335170.454612]  [<ffffffff92189f2d>]
>> ? __alloc_pages_nodemask+0x90d/0xdb0
>> 2017-01-17T11:08:33.756808+03:00 storage8 [335170.454753]  [<ffffffff923a5da4>]
>> ? _xfs_trans_bjoin+0xe4/0x110
>> 2017-01-17T11:08:33.756955+03:00 storage8 [335170.454892]  [<ffffffff921d76fa>]
>> ? alloc_pages_current+0x9a/0x120
>> 2017-01-17T11:08:33.757096+03:00 storage8 [335170.455027]  [<ffffffff921de440>]
>> ? new_slab+0x550/0x5f0
>> 2017-01-17T11:08:33.757218+03:00 storage8 [335170.455162]  [<ffffffff921df2eb>]
>> ? ___slab_alloc.isra.59+0x3db/0x500
>> 2017-01-17T11:08:33.757346+03:00 storage8 [335170.455297]  [<ffffffff923a683a>]
>> ? xfs_trans_brelse+0x1aa/0x270
>> 2017-01-17T11:08:33.757489+03:00 storage8 [335170.455427]  [<ffffffff921f4612>]
>> ? memcg_kmem_get_cache+0x72/0x170
>> 2017-01-17T11:08:33.757632+03:00 storage8 [335170.455567]  [<ffffffff92447837>]
>> ? __radix_tree_create+0xa7/0x320
>> 2017-01-17T11:08:33.757762+03:00 storage8 [335170.455706]  [<ffffffff92200be5>]
>> ? __slab_alloc.isra.60+0xe/0x12
>> 2017-01-17T11:08:33.757897+03:00 storage8 [335170.455842]  [<ffffffff921df5b0>]
>> ? kmem_cache_alloc+0x1a0/0x1b0
>> 2017-01-17T11:08:33.758034+03:00 storage8 [335170.455978]  [<ffffffff9221b746>]
>> ? __d_alloc+0x26/0x1c0
>> 2017-01-17T11:08:33.758160+03:00 storage8 [335170.456108]  [<ffffffff9221bb67>]
>> ? d_alloc+0x17/0x80
>> 2017-01-17T11:08:33.758290+03:00 storage8 [335170.456236]  [<ffffffff9221bdf1>]
>> ? d_alloc_parallel+0x31/0x4b0
>> 2017-01-17T11:08:33.758429+03:00 storage8 [335170.456370]  [<ffffffff9220f836>]
>> ? lookup_fast+0x56/0x2f0
>> 2017-01-17T11:08:33.758579+03:00 storage8 [335170.456505]  [<ffffffff923804de>]
>> ? xfs_iunlock+0x17e/0x210
>> 2017-01-17T11:08:33.758694+03:00 storage8 [335170.456635]  [<ffffffff922237f2>]
>> ? legitimize_mnt+0x12/0x40
>> 2017-01-17T11:08:33.758818+03:00 storage8 [335170.456770]  [<ffffffff9220d8f0>]
>> ? lookup_slow+0x80/0x170
>> 2017-01-17T11:08:33.758955+03:00 storage8 [335170.456897]  [<ffffffff9220fd08>]
>> ? walk_component+0x1a8/0x270
>> 2017-01-17T11:08:33.759086+03:00 storage8 [335170.457024]  [<ffffffff92210398>]
>> ? path_lookupat+0x48/0xf0
>> 2017-01-17T11:08:33.759200+03:00 storage8 [335170.457153]  [<ffffffff92212e75>]
>> ? filename_lookup+0xb5/0x190
>> 2017-01-17T11:08:33.759336+03:00 storage8 [335170.457282]  [<ffffffff92474992>]
>> ? strncpy_from_user+0x42/0x140
>> 2017-01-17T11:08:33.759462+03:00 storage8 [335170.457411]  [<ffffffff9221305b>]
>> ? getname_flags+0x7b/0x200
>> 2017-01-17T11:08:33.759593+03:00 storage8 [335170.457538]  [<ffffffff92208b66>]
>> ? vfs_fstatat+0x46/0x90
>> 2017-01-17T11:08:33.759719+03:00 storage8 [335170.457666]  [<ffffffff9221aaf4>]
>> ? dput+0x34/0x200
>> 2017-01-17T11:08:33.759859+03:00 storage8 [335170.457793]  [<ffffffff92208d47>]
>> ? SyS_newlstat+0x17/0x30
>> 2017-01-17T11:08:33.759977+03:00 storage8 [335170.457923]  [<ffffffff9282b7a0>]
>> ? entry_SYSCALL_64_fastpath+0x13/0x94
>>
>>
>> # sysctl vm
>> vm.admin_reserve_kbytes = 1048576
>> vm.block_dump = 0
>> error: permission denied on key 'vm.compact_memory'
>> vm.compact_unevictable_allowed = 1
>> vm.dirty_background_bytes = 0
>> vm.dirty_background_ratio = 1
>> vm.dirty_bytes = 0
>> vm.dirty_expire_centisecs = 1000
>> vm.dirty_ratio = 1
>> vm.dirty_writeback_centisecs = 100
>> vm.dirtytime_expire_seconds = 43200
>> vm.drop_caches = 0
>> vm.extfrag_threshold = 500
>> vm.hugepages_treat_as_movable = 0
>> vm.hugetlb_shm_group = 0
>> vm.laptop_mode = 0
>> vm.legacy_va_layout = 0
>> vm.lowmem_reserve_ratio = 128   128     16
>> vm.max_map_count = 65530
>> vm.memory_failure_early_kill = 0
>> vm.memory_failure_recovery = 1
>> vm.min_free_kbytes = 4194304
>> vm.min_slab_ratio = 1
>> vm.min_unmapped_ratio = 1
>> vm.mmap_min_addr = 65536
>> vm.mmap_rnd_bits = 28
>> vm.nr_hugepages = 0
>> vm.nr_hugepages_mempolicy = 0
>> vm.nr_overcommit_hugepages = 0
>> vm.nr_pdflush_threads = 0
>> vm.numa_zonelist_order = default
>> vm.oom_dump_tasks = 1
>> vm.oom_kill_allocating_task = 0
>> vm.overcommit_kbytes = 0
>> vm.overcommit_memory = 1
>> vm.overcommit_ratio = 50
>> vm.page-cluster = 3
>> vm.panic_on_oom = 0
>> vm.percpu_pagelist_fraction = 0
>> vm.stat_interval = 1
>> error: permission denied on key 'vm.stat_refresh'
>> vm.swappiness = 0
>> vm.user_reserve_kbytes = 131072
>> vm.vfs_cache_pressure = 200
>> vm.watermark_scale_factor = 100
>> vm.zone_reclaim_mode = 7
>>
>> # free -m
>>              total       used       free     shared    buffers     cached
>> Mem:         96646      46330      50315          0      25529       1228
>> -/+ buffers/cache:      19572      77073
>> Swap:            0          0          0
>>
>> (Yes, we don't have any swap configured)
>>
>> On this particular one we have 56 disks, using both XFS and EXT4. I attach full
>> netconsole log for 18/01/2017. You can see there that we run drop_caches
>> regularly as it helps to keep the machine alive.
>>
>> Since then we've also tried mounting filesystems with sync flag to avoid
>> writeback and it seems to have helped a lot (full day without stalls).
>>
>> --
>> You are receiving this mail because:
>> You are the assignee for the bug.


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [Bug 192981] New: page allocation stalls
  2017-01-23 21:51 ` [Bug 192981] New: page allocation stalls Andrew Morton
  2017-01-30 15:11   ` Alexander Polakov
@ 2017-02-01 15:27   ` Michal Hocko
  2017-02-15 12:56     ` Alexander Polakov
  2 siblings, 0 replies; 27+ messages in thread
From: Michal Hocko @ 2017-02-01 15:27 UTC (permalink / raw)
  To: Andrew Morton; +Cc: linux-mm, bugzilla-daemon, apolyakov

[Sorry for a late reply]

On Mon 23-01-17 13:51:11, Andrew Morton wrote:
[...]
> > We have been experiencing page allocation stalls regularly on our machines used
> > as backup servers (many disks, mostly running rsync and rm).
> > 
> > A notable one (2102516ms):
> > 
> > 2017-01-17T11:08:33.754562+03:00 storage8 [335170.452601] rsync: 
> > 2017-01-17T11:08:33.754574+03:00 page allocation stalls for 2102516ms, order:0
> > 2017-01-17T11:08:33.754825+03:00 storage8 ,
> > mode:0x26040d0(GFP_TEMPORARY|__GFP_COMP|__GFP_NOTRACK)

I have checked the log https://bugzilla.kernel.org/attachment.cgi?id=252621
and there are more problems than just this allocation stall.
2017-01-18T02:01:17.613811+03:00 storage8 [24768.850743] INFO: task mcpu:8505 blocked for more than 30 seconds.
2017-01-18T02:01:17.619823+03:00 storage8 [24768.856784] INFO: task mcpu:8506 blocked for more than 30 seconds.
2017-01-18T02:01:17.623022+03:00 storage8 [24768.859983] INFO: task mcpu:8507 blocked for more than 30 seconds.
2017-01-18T02:01:17.626262+03:00 storage8 [24768.863212] INFO: task mcpu:8508 blocked for more than 30 seconds.
2017-01-18T04:24:39.204636+03:00 storage8 [33370.536543] INFO: task atop:18760 blocked for more than 30 seconds.
2017-01-18T04:25:09.924667+03:00 storage8 [33401.256617] INFO: task kswapd0:136 blocked for more than 30 seconds.
2017-01-18T04:25:09.928503+03:00 storage8 [33401.260427] INFO: task kswapd1:137 blocked for more than 30 seconds.
2017-01-18T04:25:09.935268+03:00 storage8 [33401.267236] INFO: task atop:18760 blocked for more than 30 seconds.
2017-01-18T04:25:09.938539+03:00 storage8 [33401.270503] INFO: task rsync:29177 blocked for more than 30 seconds.
2017-01-18T11:40:44.709603+03:00 storage8 [  743.887448] INFO: task rsync:18111 blocked for more than 30 seconds.
2017-01-18T11:40:44.720724+03:00 storage8 [  743.898581] INFO: task rsync:19378 blocked for more than 30 seconds.
2017-01-18T11:42:47.589968+03:00 storage8 [  866.767027] INFO: task kswapd1:139 blocked for more than 30 seconds.
2017-01-18T11:42:47.594015+03:00 storage8 [  866.771109] INFO: task rsync:6909 blocked for more than 30 seconds.
2017-01-18T11:42:47.604005+03:00 storage8 [  866.781098] INFO: task rsync:17582 blocked for more than 30 seconds.
2017-01-18T11:42:47.611063+03:00 storage8 [  866.788159] INFO: task rsync:18111 blocked for more than 30 seconds.
2017-01-18T11:42:47.619089+03:00 storage8 [  866.796183] INFO: task rsync:18776 blocked for more than 30 seconds.
2017-01-18T11:42:47.631557+03:00 storage8 [  866.808652] INFO: task rsync:18777 blocked for more than 30 seconds.
2017-01-18T11:42:47.641018+03:00 storage8 [  866.818099] INFO: task rsync:19281 blocked for more than 30 seconds.
2017-01-18T11:42:47.647701+03:00 storage8 [  866.824797] INFO: task rsync:19740 blocked for more than 30 seconds.

Most of them are waiting for mmap_sem for read but there are cases where
the direct reclaim is waiting for a FS lock (in xfs_reclaim_inodes_ag).
I do not see the 2102516ms stall in the attached log, and the report
here doesn't contain the memory counters. Checking a few random stalls,
the picture seems pretty consistent:

[slightly edited to fix the broken new lines in the output]

2017-01-18T13:28:41.783833+03:00 page allocation stalls for 10380ms, order:0, mode:0x2604050(GFP_NOFS|__GFP_COMP|__GFP_RECLAIMABLE|__GFP_NOTRACK)
[...]
7221.937932]  free:1142851 free_pcp:20 free_cma:2942
2017-01-18T13:28:42.743176+03:00 storage8 [ 7221.938674] Node 0 active_anon:3352112kB inactive_anon:2120kB active_file:87308kB inactive_file:85252kB unevictable:3532kB isolated(anon):0kB isolated(file):2852kB mapped:5492kB dirty:0kB writeback:0kB shmem:0kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 2128kB writeback_tmp:0kB unstable:0kB pages_scanned:592392 all_unreclaimable? yes
2017-01-18T13:28:42.743780+03:00 storage8 [ 7221.939273] Node 1 active_anon:3472092kB inactive_anon:292kB active_file:73592kB inactive_file:72548kB unevictable:61700kB isolated(anon):0kB isolated(file):6640kB mapped:3268kB dirty:0kB writeback:0kB shmem:0kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 292kB writeback_tmp:0kB unstable:0kB pages_scanned:1064 all_unreclaimable? no
[...]
2017-01-18T13:28:42.744551+03:00 storage8 [ 7221.940642] Node 0 DMA32 free: 491744kB min:129476kB low:161844kB high:194212kB active_anon:59952kB inactive_anon:0kB active_file:148kB inactive_file:108kB unevictable:0kB writepending:0kB present:3120640kB managed:3055072kB mlocked:0kB slab_reclaimable:2317296kB slab_unreclaimable:96572kB kernel_stack:32kB pagetables:136kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
2017-01-18T13:28:42.892456+03:00 storage8 [ 7222.088527] lowmem_reserve[]: 90576 90576 0 0
2017-01-18T13:28:42.892592+03:00 storage8 [ 7222.088696] Node 0 Normal free: 1965392kB min:1965456kB low:2456820kB high:2948184kB active_anon:3292160kB inactive_anon:2120kB active_file:87560kB inactive_file:85192kB unevictable:3532kB writepending:0kB present:47185920kB managed:46375368kB mlocked:3532kB slab_reclaimable:24725412kB slab_unreclaimable:2591604kB kernel_stack:15944kB pagetables:98056kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
2017-01-18T13:28:42.893512+03:00 storage8 [ 7222.089602] Node 1 Normal free: 2099044kB min:2098692kB low:2623364kB high:3148036kB active_anon:3472092kB inactive_anon:292kB active_file:71624kB inactive_file:71456kB unevictable:61700kB writepending:0kB present:50331648kB managed:49519168kB mlocked:61700kB slab_reclaimable:28674160kB slab_unreclaimable:2597592kB kernel_stack:8472kB pagetables:51836kB bounce:0kB free_pcp:296kB local_pcp:0kB free_cma:11768kB

All the eligible zones are low on memory (DMA32 once the lowmem
protection is taken into account). Anon memory is not reclaimable because
you do not have any swap. The file LRU seems to contain a fair amount of
memory. The dirty and writeback counters would suggest that the page
cache should be reclaimable, but maybe something is pinning those pages.
I suspect the reclaim is blocked waiting on metadata IO. I am not an
expert on the fs side of this, but it smells like the IO cannot keep pace
with the load.
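
For illustration, the order-0 check that __zone_watermark_ok() does boils
down to roughly the following, plugging in the zone numbers quoted above
(a standalone sketch, not kernel code; the real check also adjusts for
CMA and highatomic reserves). Both Node 0 zones fail the check (DMA32
only because of the lowmem reserve), and Node 1 Normal passes by a mere
~350kB while sitting far below its low watermark, which fits tasks piling
up in direct reclaim:

/* sketch: order-0 watermark check using the zone numbers quoted above */
#include <stdio.h>

static void check(const char *zone, long free_kb, long min_kb, long reserve_pages)
{
	long free_pages = free_kb / 4;          /* 4kB pages on x86-64 */
	long needed     = min_kb / 4 + reserve_pages;

	printf("%-14s free=%6ld pages, needs >%6ld -> %s\n", zone, free_pages,
	       needed, free_pages > needed ? "above min" : "below min watermark");
}

int main(void)
{
	check("Node 0 DMA32",  491744,  129476, 90576); /* lowmem_reserve[] in pages */
	check("Node 0 Normal", 1965392, 1965456, 0);
	check("Node 1 Normal", 2099044, 2098692, 0);
	return 0;
}
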
-- 
Michal Hocko
SUSE Labs


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [Bug 192981] New: page allocation stalls
  2017-01-23 21:51 ` [Bug 192981] New: page allocation stalls Andrew Morton
@ 2017-02-15 12:56     ` Alexander Polakov
  2017-02-01 15:27   ` Michal Hocko
  2017-02-15 12:56     ` Alexander Polakov
  2 siblings, 0 replies; 27+ messages in thread
From: Alexander Polakov @ 2017-02-15 12:56 UTC (permalink / raw)
  To: Andrew Morton; +Cc: linux-mm, linux-xfs, bugzilla-daemon

On 01/24/2017 12:51 AM, Andrew Morton wrote:
>
>
> (switched to email.  Please respond via emailed reply-to-all, not via the
> bugzilla web interface).
>
> A 2100 second page allocation stall!
>

I think we finally figured out the problem using Tetsuo Handa's mallocwd
patch. It appears to be in the XFS direct reclaim path.

Here's how it goes:

Memory is low, so rsync goes into direct reclaim, taking the XFS reclaim
mutex under xfs_reclaim_inodes_nr():

2017-02-14T00:12:59.811447+03:00 storage9 [24646.497290] MemAlloc: 
rsync(19706) flags=0x404840 switches=8692 seq=340 
gfp=0x27080c0(GFP_KERNEL_ACCOUNT|__GFP_ZERO|__GFP_NOTRACK) order=0 
delay=6795
2017-02-14T00:12:59.811447+03:00 storage9 [24646.497550] rsync           R
2017-02-14T00:12:59.811579+03:00 storage9     0 19706   5000 0x00000000
2017-02-14T00:12:59.811579+03:00 storage9 [24646.497690]  ffffa4361dc36c00
2017-02-14T00:12:59.811591+03:00 storage9  0000000c1dc36c00
2017-02-14T00:12:59.811591+03:00 storage9  ffffa4361dc36c00
2017-02-14T00:12:59.811724+03:00 storage9  ffffa44347792580
2017-02-14T00:12:59.811841+03:00 storage9
2017-02-14T00:12:59.811841+03:00 storage9 [24646.497951]  0000000000000000
2017-02-14T00:12:59.811846+03:00 storage9  ffffb1cab6343458
2017-02-14T00:12:59.811885+03:00 storage9  ffffffffb383e799
2017-02-14T00:12:59.811987+03:00 storage9  0000000000000000
2017-02-14T00:12:59.812103+03:00 storage9
2017-02-14T00:12:59.812103+03:00 storage9 [24646.498208]  ffffa443ffff7a00
2017-02-14T00:12:59.812103+03:00 storage9  0000000000000001
2017-02-14T00:12:59.812104+03:00 storage9  ffffb1cab6343448
2017-02-14T00:12:59.812233+03:00 storage9  0000000000000002
2017-02-14T00:12:59.812350+03:00 storage9
2017-02-14T00:12:59.812475+03:00 storage9 [24646.498462] Call Trace:
2017-02-14T00:12:59.812610+03:00 storage9 [24646.498587] 
[<ffffffffb383e799>] ? __schedule+0x179/0x5c8
2017-02-14T00:12:59.812733+03:00 storage9 [24646.498718] 
[<ffffffffb383ecc2>] ? schedule+0x32/0x80
2017-02-14T00:12:59.812869+03:00 storage9 [24646.498846] 
[<ffffffffb3841229>] ? schedule_timeout+0x159/0x2a0
2017-02-14T00:12:59.812997+03:00 storage9 [24646.498977] 
[<ffffffffb30f1450>] ? add_timer_on+0x130/0x130
2017-02-14T00:12:59.813130+03:00 storage9 [24646.499108] 
[<ffffffffb318ff13>] ? __alloc_pages_nodemask+0xe73/0x16b0
2017-02-14T00:12:59.813263+03:00 storage9 [24646.499240] 
[<ffffffffb31deb2a>] ? alloc_pages_current+0x9a/0x120
2017-02-14T00:12:59.813388+03:00 storage9 [24646.499371] 
[<ffffffffb33d1a51>] ? xfs_buf_allocate_memory+0x171/0x2c0
2017-02-14T00:12:59.813564+03:00 storage9 [24646.499503] 
[<ffffffffb337c48b>] ? xfs_buf_get_map+0x18b/0x1d0
2017-02-14T00:12:59.813654+03:00 storage9 [24646.499634] 
[<ffffffffb337cdcb>] ? xfs_buf_read_map+0x3b/0x160
2017-02-14T00:12:59.813783+03:00 storage9 [24646.499765] 
[<ffffffffb33b71b0>] ? xfs_trans_read_buf_map+0x1f0/0x490
2017-02-14T00:12:59.813931+03:00 storage9 [24646.499897] 
[<ffffffffb3362859>] ? xfs_imap_to_bp+0x79/0x120
2017-02-14T00:12:59.814056+03:00 storage9 [24646.500029] 
[<ffffffffb3394a38>] ? xfs_iflush+0x118/0x380
2017-02-14T00:12:59.814196+03:00 storage9 [24646.500158] 
[<ffffffffb30d6130>] ? wake_atomic_t_function+0x40/0x40
2017-02-14T00:12:59.814314+03:00 storage9 [24646.500289] 
[<ffffffffb3385af4>] ? xfs_reclaim_inode+0x274/0x3f0
2017-02-14T00:12:59.814444+03:00 storage9 [24646.500421] 
[<ffffffffb3385e29>] ? xfs_reclaim_inodes_ag+0x1b9/0x2c0
2017-02-14T00:12:59.814573+03:00 storage9 [24646.500553] 
[<ffffffffb345c6b8>] ? radix_tree_next_chunk+0x108/0x2a0
2017-02-14T00:12:59.814701+03:00 storage9 [24646.500685] 
[<ffffffffb30f0e31>] ? lock_timer_base+0x51/0x70
2017-02-14T00:12:59.814846+03:00 storage9 [24646.500814] 
[<ffffffffb345ca4e>] ? radix_tree_gang_lookup_tag+0xae/0x180
2017-02-14T00:12:59.814966+03:00 storage9 [24646.500946] 
[<ffffffffb336f588>] ? xfs_perag_get_tag+0x48/0x100
2017-02-14T00:12:59.815104+03:00 storage9 [24646.501079] 
[<ffffffffb31b63c1>] ? __list_lru_walk_one.isra.7+0x31/0x120
2017-02-14T00:12:59.815231+03:00 storage9 [24646.501212] 
[<ffffffffb322aed0>] ? iget5_locked+0x240/0x240
2017-02-14T00:12:59.815360+03:00 storage9 [24646.501342] 
[<ffffffffb33873c1>] ? xfs_reclaim_inodes_nr+0x31/0x40
2017-02-14T00:12:59.815489+03:00 storage9 [24646.501472] 
[<ffffffffb3213160>] ? super_cache_scan+0x1a0/0x1b0
2017-02-14T00:12:59.815629+03:00 storage9 [24646.501603] 
[<ffffffffb319bf62>] ? shrink_slab+0x262/0x440
2017-02-14T00:12:59.815760+03:00 storage9 [24646.501734] 
[<ffffffffb319cc8b>] ? drop_slab_node+0x2b/0x60
2017-02-14T00:12:59.815891+03:00 storage9 [24646.501864] 
[<ffffffffb319cd02>] ? drop_slab+0x42/0x70
2017-02-14T00:12:59.816020+03:00 storage9 [24646.501994] 
[<ffffffffb318abc0>] ? out_of_memory+0x220/0x560
2017-02-14T00:12:59.816160+03:00 storage9 [24646.502122] 
[<ffffffffb31903b2>] ? __alloc_pages_nodemask+0x1312/0x16b0
2017-02-14T00:12:59.816285+03:00 storage9 [24646.502255] 
[<ffffffffb31deb2a>] ? alloc_pages_current+0x9a/0x120
2017-02-14T00:12:59.816407+03:00 storage9 [24646.502386] 
[<ffffffffb304b803>] ? pte_alloc_one+0x13/0x40
2017-02-14T00:12:59.816536+03:00 storage9 [24646.502517] 
[<ffffffffb31be21f>] ? handle_mm_fault+0xc7f/0x14b0
2017-02-14T00:12:59.816659+03:00 storage9 [24646.502648] 
[<ffffffffb30461ff>] ? __do_page_fault+0x1cf/0x5a0
2017-02-14T00:12:59.816851+03:00 storage9 [24646.502777] 
[<ffffffffb3843362>] ? page_fault+0x22/0x30

But it cannot get the memory (presumably because memory is low), so it
stays blocked while still holding the reclaim mutex.

Other processes enter the same reclaim path, but they can't get past the
mutex in xfs_reclaim_inodes_nr():

2017-02-14T00:12:59.817057+03:00 storage9 [24646.502909] MemAlloc: 
rsync(19707) flags=0x404840 switches=6344 seq=638 
gfp=0x24201ca(GFP_HIGHUSER_MOVABLE|__GFP_COLD) order=0 delay=3257 
uninterruptible
2017-02-14T00:12:59.817062+03:00 storage9 [24646.503167] rsync           D
2017-02-14T00:12:59.817189+03:00 storage9     0 19707   5000 0x00000000
2017-02-14T00:12:59.817197+03:00 storage9 [24646.503299]  0000000000000000
2017-02-14T00:12:59.817197+03:00 storage9  ffffa44347793840
2017-02-14T00:12:59.817207+03:00 storage9  ffffa4277a36ec00
2017-02-14T00:12:59.817316+03:00 storage9  ffffa4361dc35100
2017-02-14T00:12:59.817446+03:00 storage9
2017-02-14T00:12:59.817446+03:00 storage9 [24646.503554]  ffffa4437a514300
2017-02-14T00:12:59.817461+03:00 storage9  ffffb1cab0823638
2017-02-14T00:12:59.817461+03:00 storage9  ffffffffb383e799
2017-02-14T00:12:59.817598+03:00 storage9  ffffa44365967b00
2017-02-14T00:12:59.817699+03:00 storage9
2017-02-14T00:12:59.817699+03:00 storage9 [24646.503809]  ffffb1cab0823680
2017-02-14T00:12:59.817717+03:00 storage9  0000000000000001
2017-02-14T00:12:59.817717+03:00 storage9  ffffa443424062a8
2017-02-14T00:12:59.817832+03:00 storage9  0000000000000000
2017-02-14T00:12:59.817948+03:00 storage9
2017-02-14T00:12:59.818078+03:00 storage9 [24646.504061] Call Trace:
2017-02-14T00:12:59.818211+03:00 storage9 [24646.504189] 
[<ffffffffb383e799>] ? __schedule+0x179/0x5c8
2017-02-14T00:12:59.818339+03:00 storage9 [24646.504321] 
[<ffffffffb383ecc2>] ? schedule+0x32/0x80
2017-02-14T00:12:59.818471+03:00 storage9 [24646.504451] 
[<ffffffffb383ee6e>] ? schedule_preempt_disabled+0xe/0x20
2017-02-14T00:12:59.818611+03:00 storage9 [24646.504582] 
[<ffffffffb38400aa>] ? __mutex_lock_slowpath+0x8a/0x100
2017-02-14T00:12:59.818737+03:00 storage9 [24646.504715] 
[<ffffffffb3840133>] ? mutex_lock+0x13/0x22
2017-02-14T00:12:59.818870+03:00 storage9 [24646.504846] 
[<ffffffffb3385e98>] ? xfs_reclaim_inodes_ag+0x228/0x2c0
2017-02-14T00:12:59.819024+03:00 storage9 [24646.504977] 
[<ffffffffb345c6b8>] ? radix_tree_next_chunk+0x108/0x2a0
2017-02-14T00:12:59.819124+03:00 storage9 [24646.505107] 
[<ffffffffb345ca4e>] ? radix_tree_gang_lookup_tag+0xae/0x180
2017-02-14T00:12:59.819273+03:00 storage9 [24646.505238] 
[<ffffffffb336f588>] ? xfs_perag_get_tag+0x48/0x100
2017-02-14T00:12:59.819397+03:00 storage9 [24646.505370] 
[<ffffffffb31b63c1>] ? __list_lru_walk_one.isra.7+0x31/0x120
2017-02-14T00:12:59.819528+03:00 storage9 [24646.505502] 
[<ffffffffb322aed0>] ? iget5_locked+0x240/0x240
2017-02-14T00:12:59.819657+03:00 storage9 [24646.505634] 
[<ffffffffb33873c1>] ? xfs_reclaim_inodes_nr+0x31/0x40
2017-02-14T00:12:59.819781+03:00 storage9 [24646.505766] 
[<ffffffffb3213160>] ? super_cache_scan+0x1a0/0x1b0
2017-02-14T00:12:59.819915+03:00 storage9 [24646.505897] 
[<ffffffffb319bf62>] ? shrink_slab+0x262/0x440
2017-02-14T00:12:59.820100+03:00 storage9 [24646.506028] 
[<ffffffffb319fa6f>] ? shrink_node+0xef/0x2d0
2017-02-14T00:12:59.820176+03:00 storage9 [24646.506158] 
[<ffffffffb319ff42>] ? do_try_to_free_pages+0xc2/0x2b0
2017-02-14T00:12:59.820300+03:00 storage9 [24646.506288] 
[<ffffffffb31a03e2>] ? try_to_free_pages+0xe2/0x1c0
2017-02-14T00:12:59.820442+03:00 storage9 [24646.506415] 
[<ffffffffb318c1a9>] ? __perform_reclaim.isra.80+0x79/0xc0
2017-02-14T00:12:59.820556+03:00 storage9 [24646.506542] 
[<ffffffffb318f901>] ? __alloc_pages_nodemask+0x861/0x16b0
2017-02-14T00:12:59.820683+03:00 storage9 [24646.506669] 
[<ffffffffb345d64b>] ? __radix_tree_lookup+0x7b/0xe0
2017-02-14T00:12:59.820851+03:00 storage9 [24646.506796] 
[<ffffffffb31deb2a>] ? alloc_pages_current+0x9a/0x120
2017-02-14T00:12:59.820942+03:00 storage9 [24646.506922] 
[<ffffffffb3188b56>] ? filemap_fault+0x396/0x540
2017-02-14T00:12:59.821072+03:00 storage9 [24646.507048] 
[<ffffffffb345c6b8>] ? radix_tree_next_chunk+0x108/0x2a0
2017-02-14T00:12:59.821192+03:00 storage9 [24646.507176] 
[<ffffffffb32a5bdf>] ? ext4_filemap_fault+0x3f/0x60
2017-02-14T00:12:59.821318+03:00 storage9 [24646.507303] 
[<ffffffffb31b8591>] ? __do_fault+0x71/0x120
2017-02-14T00:12:59.821441+03:00 storage9 [24646.507428] 
[<ffffffffb31be36d>] ? handle_mm_fault+0xdcd/0x14b0
2017-02-14T00:12:59.821574+03:00 storage9 [24646.507555] 
[<ffffffffb31ea9a4>] ? kmem_cache_free+0x204/0x220
2017-02-14T00:12:59.821697+03:00 storage9 [24646.507682] 
[<ffffffffb32114b9>] ? __fput+0x149/0x200
2017-02-14T00:12:59.821824+03:00 storage9 [24646.507806] 
[<ffffffffb30461ff>] ? __do_page_fault+0x1cf/0x5a0
2017-02-14T00:12:59.821954+03:00 storage9 [24646.507934] 
[<ffffffffb3843362>] ? page_fault+0x22/0x30

Even kswapd gets stuck:

2017-02-14T00:13:10.306351+03:00 storage9 [24656.991375] MemAlloc: 
kswapd0(114) flags=0xa40840 switches=22109 uninterruptible
2017-02-14T00:13:10.306351+03:00 storage9 [24656.991622] kswapd0         D
2017-02-14T00:13:10.306473+03:00 storage9     0   114      2 0x00000000
2017-02-14T00:13:10.306482+03:00 storage9 [24656.991753]  0000000000000000
2017-02-14T00:13:10.306482+03:00 storage9  ffffa43830be0f00
2017-02-14T00:13:10.306488+03:00 storage9  ffffa44342ff2880
2017-02-14T00:13:10.306601+03:00 storage9  ffffa443600e0d80
2017-02-14T00:13:10.306714+03:00 storage9
2017-02-14T00:13:10.306723+03:00 storage9 [24656.992000]  ffffa4437ad14300
2017-02-14T00:13:10.306723+03:00 storage9  ffffb1ca8d2737d8
2017-02-14T00:13:10.306731+03:00 storage9  ffffffffb383e799
2017-02-14T00:13:10.306846+03:00 storage9  ffffb1ca8d273788
2017-02-14T00:13:10.306956+03:00 storage9
2017-02-14T00:13:10.306957+03:00 storage9 [24656.992245]  ffffb1ca8d273788
2017-02-14T00:13:10.306957+03:00 storage9  ffffb1ca8d273798
2017-02-14T00:13:10.307011+03:00 storage9  ffffb1ca8d273798
2017-02-14T00:13:10.307087+03:00 storage9  0000000002400001
2017-02-14T00:13:10.307200+03:00 storage9
2017-02-14T00:13:10.307321+03:00 storage9 [24656.992489] Call Trace:
2017-02-14T00:13:10.307475+03:00 storage9 [24656.992611] 
[<ffffffffb383e799>] ? __schedule+0x179/0x5c8
2017-02-14T00:13:10.307572+03:00 storage9 [24656.992736] 
[<ffffffffb383ecc2>] ? schedule+0x32/0x80
2017-02-14T00:13:10.307703+03:00 storage9 [24656.992861] 
[<ffffffffb3841002>] ? rwsem_down_read_failed+0xb2/0x100
2017-02-14T00:13:10.307831+03:00 storage9 [24656.992987] 
[<ffffffffb3466004>] ? call_rwsem_down_read_failed+0x14/0x30
2017-02-14T00:13:10.307949+03:00 storage9 [24656.993114] 
[<ffffffffb3840443>] ? down_read+0x13/0x30
2017-02-14T00:13:10.308075+03:00 storage9 [24656.993239] 
[<ffffffffb3373420>] ? xfs_map_blocks+0x90/0x2f0
2017-02-14T00:13:10.308214+03:00 storage9 [24656.993366] 
[<ffffffffb3375256>] ? xfs_do_writepage+0x2b6/0x6a0
2017-02-14T00:13:10.308337+03:00 storage9 [24656.993496] 
[<ffffffffb3245589>] ? submit_bh_wbc+0x169/0x200
2017-02-14T00:13:10.308518+03:00 storage9 [24656.993627] 
[<ffffffffb337566c>] ? xfs_vm_writepage+0x2c/0x50
2017-02-14T00:13:10.308603+03:00 storage9 [24656.993760] 
[<ffffffffb319bb0b>] ? pageout.isra.60+0xeb/0x2e0
2017-02-14T00:13:10.308728+03:00 storage9 [24656.993891] 
[<ffffffffb319df06>] ? shrink_page_list+0x736/0xa50
2017-02-14T00:13:10.308935+03:00 storage9 [24656.994022] 
[<ffffffffb319eb52>] ? shrink_inactive_list+0x202/0x4b0
2017-02-14T00:13:10.308993+03:00 storage9 [24656.994152] 
[<ffffffffb319f4d1>] ? shrink_node_memcg+0x2e1/0x790
2017-02-14T00:13:10.309148+03:00 storage9 [24656.994282] 
[<ffffffffb319fa49>] ? shrink_node+0xc9/0x2d0
2017-02-14T00:13:10.309258+03:00 storage9 [24656.994416] 
[<ffffffffb31a0944>] ? kswapd+0x2e4/0x690
2017-02-14T00:13:10.309854+03:00 storage9 [24656.994546] 
[<ffffffffb31a0660>] ? mem_cgroup_shrink_node+0x1a0/0x1a0
2017-02-14T00:13:10.309854+03:00 storage9 [24656.994678] 
[<ffffffffb31a0660>] ? mem_cgroup_shrink_node+0x1a0/0x1a0
2017-02-14T00:13:10.309854+03:00 storage9 [24656.994808] 
[<ffffffffb30b4fc2>] ? kthread+0xc2/0xe0
2017-02-14T00:13:10.309854+03:00 storage9 [24656.994937] 
[<ffffffffb30b4f00>] ? __kthread_init_worker+0xb0/0xb0
2017-02-14T00:13:10.309910+03:00 storage9 [24656.995069] 
[<ffffffffb38425e2>] ? ret_from_fork+0x22/0x30

This finally leads to "Kernel panic - not syncing: Out of memory and no
killable processes..." because no process can make progress.

I quickly hacked up the following:

diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c
index 9ef152b..8adfb0a 100644
--- a/fs/xfs/xfs_icache.c
+++ b/fs/xfs/xfs_icache.c
@@ -1254,7 +1254,7 @@ struct xfs_inode *
         xfs_reclaim_work_queue(mp);
         xfs_ail_push_all(mp->m_ail);

-       return xfs_reclaim_inodes_ag(mp, SYNC_TRYLOCK | SYNC_WAIT, &nr_to_scan);
+       return 0; // xfs_reclaim_inodes_ag(mp, SYNC_TRYLOCK | SYNC_WAIT, &nr_to_scan);
  }
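
With that change xfs_reclaim_inodes_nr() effectively becomes the
following (a sketch reconstructed from the hunk above, so treat the exact
signature as from memory of the 4.9 source): it still kicks the
background inode-reclaim worker and pushes the AIL, but it no longer does
synchronous (SYNC_WAIT) inode reclaim in the caller's context, so the
superblock shrinker can no longer block direct reclaim here:

int
xfs_reclaim_inodes_nr(
	struct xfs_mount	*mp,
	int			nr_to_scan)
{
	/* still kick background inode reclaim and push the AIL ... */
	xfs_reclaim_work_queue(mp);
	xfs_ail_push_all(mp->m_ail);

	/* ... but never reclaim inodes synchronously from the shrinker */
	return 0;
}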


We ran two of our machines with this patch overnight; no more lockups or
stalls were detected.

xfsaild does its work asynchronously, so the xfs_inode slab does not grow
out of control, as confirmed by slabtop.
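
For reference, the same xfs_inode cache counters that slabtop shows can
be read straight from /proc/slabinfo; a minimal sketch (may need root):

#include <stdio.h>
#include <string.h>

int main(void)
{
	FILE *f = fopen("/proc/slabinfo", "r");
	char line[512];

	if (!f) {
		perror("/proc/slabinfo");
		return 1;
	}
	/* format: name <active_objs> <num_objs> <objsize> ... */
	while (fgets(line, sizeof(line), f))
		if (!strncmp(line, "xfs_inode ", 10))
			fputs(line, stdout);
	fclose(f);
	return 0;
}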

I put netconsole logs here: http://aplkv.beget.tech/lkml/xfs/ for anyone 
interested.

-- 
Alexander Polakov | system software engineer | https://beget.com

^ permalink raw reply related	[flat|nested] 27+ messages in thread

* Re: [Bug 192981] New: page allocation stalls
  2017-02-15 12:56     ` Alexander Polakov
@ 2017-02-15 16:05       ` Brian Foster
  -1 siblings, 0 replies; 27+ messages in thread
From: Brian Foster @ 2017-02-15 16:05 UTC (permalink / raw)
  To: Alexander Polakov; +Cc: Andrew Morton, linux-mm, linux-xfs, bugzilla-daemon

On Wed, Feb 15, 2017 at 03:56:56PM +0300, Alexander Polakov wrote:
> On 01/24/2017 12:51 AM, Andrew Morton wrote:
> > 
> > 
> > (switched to email.  Please respond via emailed reply-to-all, not via the
> > bugzilla web interface).
> > 
> > A 2100 second page allocation stall!
> > 
> 
> I think we finally figured out the problem using Tetsuo Handa's mallocwd
> patch. It seems like it is in XFS direct reclaim path.
> 
> Here's how it goes:
> 
> memory is low, rsync goes into direct reclaim, locking xfs mutex in
> xfs_reclaim_inodes_nr():
> 
> 2017-02-14T00:12:59.811447+03:00 storage9 [24646.497290] MemAlloc:
> rsync(19706) flags=0x404840 switches=8692 seq=340
> gfp=0x27080c0(GFP_KERNEL_ACCOUNT|__GFP_ZERO|__GFP_NOTRACK) order=0
> delay=6795
> 2017-02-14T00:12:59.811447+03:00 storage9 [24646.497550] rsync           R
> 2017-02-14T00:12:59.811579+03:00 storage9     0 19706   5000 0x00000000
> 2017-02-14T00:12:59.811579+03:00 storage9 [24646.497690]  ffffa4361dc36c00
> 2017-02-14T00:12:59.811591+03:00 storage9  0000000c1dc36c00
> 2017-02-14T00:12:59.811591+03:00 storage9  ffffa4361dc36c00
> 2017-02-14T00:12:59.811724+03:00 storage9  ffffa44347792580
> 2017-02-14T00:12:59.811841+03:00 storage9
> 2017-02-14T00:12:59.811841+03:00 storage9 [24646.497951]  0000000000000000
> 2017-02-14T00:12:59.811846+03:00 storage9  ffffb1cab6343458
> 2017-02-14T00:12:59.811885+03:00 storage9  ffffffffb383e799
> 2017-02-14T00:12:59.811987+03:00 storage9  0000000000000000
> 2017-02-14T00:12:59.812103+03:00 storage9
> 2017-02-14T00:12:59.812103+03:00 storage9 [24646.498208]  ffffa443ffff7a00
> 2017-02-14T00:12:59.812103+03:00 storage9  0000000000000001
> 2017-02-14T00:12:59.812104+03:00 storage9  ffffb1cab6343448
> 2017-02-14T00:12:59.812233+03:00 storage9  0000000000000002
> 2017-02-14T00:12:59.812350+03:00 storage9
> 2017-02-14T00:12:59.812475+03:00 storage9 [24646.498462] Call Trace:
> 2017-02-14T00:12:59.812610+03:00 storage9 [24646.498587]
> [<ffffffffb383e799>] ? __schedule+0x179/0x5c8
> 2017-02-14T00:12:59.812733+03:00 storage9 [24646.498718]
> [<ffffffffb383ecc2>] ? schedule+0x32/0x80
> 2017-02-14T00:12:59.812869+03:00 storage9 [24646.498846]
> [<ffffffffb3841229>] ? schedule_timeout+0x159/0x2a0
> 2017-02-14T00:12:59.812997+03:00 storage9 [24646.498977]
> [<ffffffffb30f1450>] ? add_timer_on+0x130/0x130
> 2017-02-14T00:12:59.813130+03:00 storage9 [24646.499108]
> [<ffffffffb318ff13>] ? __alloc_pages_nodemask+0xe73/0x16b0
> 2017-02-14T00:12:59.813263+03:00 storage9 [24646.499240]
> [<ffffffffb31deb2a>] ? alloc_pages_current+0x9a/0x120
> 2017-02-14T00:12:59.813388+03:00 storage9 [24646.499371]
> [<ffffffffb33d1a51>] ? xfs_buf_allocate_memory+0x171/0x2c0
> 2017-02-14T00:12:59.813564+03:00 storage9 [24646.499503]
> [<ffffffffb337c48b>] ? xfs_buf_get_map+0x18b/0x1d0
> 2017-02-14T00:12:59.813654+03:00 storage9 [24646.499634]
> [<ffffffffb337cdcb>] ? xfs_buf_read_map+0x3b/0x160
> 2017-02-14T00:12:59.813783+03:00 storage9 [24646.499765]
> [<ffffffffb33b71b0>] ? xfs_trans_read_buf_map+0x1f0/0x490
> 2017-02-14T00:12:59.813931+03:00 storage9 [24646.499897]
> [<ffffffffb3362859>] ? xfs_imap_to_bp+0x79/0x120
> 2017-02-14T00:12:59.814056+03:00 storage9 [24646.500029]
> [<ffffffffb3394a38>] ? xfs_iflush+0x118/0x380
> 2017-02-14T00:12:59.814196+03:00 storage9 [24646.500158]
> [<ffffffffb30d6130>] ? wake_atomic_t_function+0x40/0x40
> 2017-02-14T00:12:59.814314+03:00 storage9 [24646.500289]
> [<ffffffffb3385af4>] ? xfs_reclaim_inode+0x274/0x3f0
> 2017-02-14T00:12:59.814444+03:00 storage9 [24646.500421]
> [<ffffffffb3385e29>] ? xfs_reclaim_inodes_ag+0x1b9/0x2c0
> 2017-02-14T00:12:59.814573+03:00 storage9 [24646.500553]
> [<ffffffffb345c6b8>] ? radix_tree_next_chunk+0x108/0x2a0
> 2017-02-14T00:12:59.814701+03:00 storage9 [24646.500685]
> [<ffffffffb30f0e31>] ? lock_timer_base+0x51/0x70
> 2017-02-14T00:12:59.814846+03:00 storage9 [24646.500814]
> [<ffffffffb345ca4e>] ? radix_tree_gang_lookup_tag+0xae/0x180
> 2017-02-14T00:12:59.814966+03:00 storage9 [24646.500946]
> [<ffffffffb336f588>] ? xfs_perag_get_tag+0x48/0x100
> 2017-02-14T00:12:59.815104+03:00 storage9 [24646.501079]
> [<ffffffffb31b63c1>] ? __list_lru_walk_one.isra.7+0x31/0x120
> 2017-02-14T00:12:59.815231+03:00 storage9 [24646.501212]
> [<ffffffffb322aed0>] ? iget5_locked+0x240/0x240
> 2017-02-14T00:12:59.815360+03:00 storage9 [24646.501342]
> [<ffffffffb33873c1>] ? xfs_reclaim_inodes_nr+0x31/0x40
> 2017-02-14T00:12:59.815489+03:00 storage9 [24646.501472]
> [<ffffffffb3213160>] ? super_cache_scan+0x1a0/0x1b0
> 2017-02-14T00:12:59.815629+03:00 storage9 [24646.501603]
> [<ffffffffb319bf62>] ? shrink_slab+0x262/0x440
> 2017-02-14T00:12:59.815760+03:00 storage9 [24646.501734]
> [<ffffffffb319cc8b>] ? drop_slab_node+0x2b/0x60
> 2017-02-14T00:12:59.815891+03:00 storage9 [24646.501864]
> [<ffffffffb319cd02>] ? drop_slab+0x42/0x70
> 2017-02-14T00:12:59.816020+03:00 storage9 [24646.501994]
> [<ffffffffb318abc0>] ? out_of_memory+0x220/0x560
> 2017-02-14T00:12:59.816160+03:00 storage9 [24646.502122]
> [<ffffffffb31903b2>] ? __alloc_pages_nodemask+0x1312/0x16b0
> 2017-02-14T00:12:59.816285+03:00 storage9 [24646.502255]
> [<ffffffffb31deb2a>] ? alloc_pages_current+0x9a/0x120
> 2017-02-14T00:12:59.816407+03:00 storage9 [24646.502386]
> [<ffffffffb304b803>] ? pte_alloc_one+0x13/0x40
> 2017-02-14T00:12:59.816536+03:00 storage9 [24646.502517]
> [<ffffffffb31be21f>] ? handle_mm_fault+0xc7f/0x14b0
> 2017-02-14T00:12:59.816659+03:00 storage9 [24646.502648]
> [<ffffffffb30461ff>] ? __do_page_fault+0x1cf/0x5a0
> 2017-02-14T00:12:59.816851+03:00 storage9 [24646.502777]
> [<ffffffffb3843362>] ? page_fault+0x22/0x30
> 

You're in inode reclaim, blocked on a memory allocation for an inode
buffer required to flush a dirty inode. I suppose this means that the
backing buffer for the inode has already been reclaimed and must be
re-read, which ideally wouldn't have occurred before the inode is
flushed.

> But it cannot get memory, because it's low (?). So it stays blocked.
> 
> Other processes do the same but they can't get past the mutex in
> xfs_reclaim_inodes_nr():
> 
...
> Which finally leads to "Kernel panic - not syncing: Out of memory and no
> killable processes..." as no process is able to proceed.
> 
> I quickly hacked this:
> 
> diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c
> index 9ef152b..8adfb0a 100644
> --- a/fs/xfs/xfs_icache.c
> +++ b/fs/xfs/xfs_icache.c
> @@ -1254,7 +1254,7 @@ struct xfs_inode *
>         xfs_reclaim_work_queue(mp);
>         xfs_ail_push_all(mp->m_ail);
> 
> -       return xfs_reclaim_inodes_ag(mp, SYNC_TRYLOCK | SYNC_WAIT,
> &nr_to_scan);
> +       return 0; // xfs_reclaim_inodes_ag(mp, SYNC_TRYLOCK | SYNC_WAIT,
> &nr_to_scan);
>  }
> 

So you've disabled inode reclaim completely...

The bz shows you have non-default vm settings such as
'vm.vfs_cache_pressure = 200.' My understanding is that this prefers
aggressive inode reclaim, yet the code workaround here is to bypass XFS
inode reclaim. Out of curiosity, have you reproduced this problem using
the default vfs_cache_pressure value (or, if so, possibly with it moved
in the other direction)?
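
(For a test run the knob can be flipped at runtime, e.g.:

  sysctl vm.vfs_cache_pressure           # currently 200 per the bz
  sysctl -w vm.vfs_cache_pressure=100    # kernel default
  sysctl -w vm.vfs_cache_pressure=50     # "the other direction"

just a sketch; a persistent setting would go in /etc/sysctl.conf or
/etc/sysctl.d/ instead.)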

Brian

> 
> We ran 2 of our machines with this patch for a night, no more lockups/stalls
> were detected.
> 
> xfsaild does its work asynchronously, so xfs_inodes don't run wild as
> confirmed by slabtop.
> 
> I put netconsole logs here: http://aplkv.beget.tech/lkml/xfs/ for anyone
> interested.
> 
> -- 
> Alexander Polakov | system software engineer | https://beget.com

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [Bug 192981] New: page allocation stalls
  2017-02-15 16:05       ` Brian Foster
@ 2017-02-15 16:52         ` Alexander Polakov
  -1 siblings, 0 replies; 27+ messages in thread
From: Alexander Polakov @ 2017-02-15 16:52 UTC (permalink / raw)
  To: Brian Foster; +Cc: linux-mm, linux-xfs, bugzilla-daemon

On 02/15/2017 07:05 PM, Brian Foster wrote:
> You're in inode reclaim, blocked on a memory allocation for an inode
> buffer required to flush a dirty inode. I suppose this means that the
> backing buffer for the inode has already been reclaimed and must be
> re-read, which ideally wouldn't have occurred before the inode is
> flushed.
>
>> But it cannot get memory, because it's low (?). So it stays blocked.
>>
>> Other processes do the same but they can't get past the mutex in
>> xfs_reclaim_inodes_nr():
>>
> ...
>> Which finally leads to "Kernel panic - not syncing: Out of memory and no
>> killable processes..." as no process is able to proceed.
>>
>> I quickly hacked this:
>>
>> diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c
>> index 9ef152b..8adfb0a 100644
>> --- a/fs/xfs/xfs_icache.c
>> +++ b/fs/xfs/xfs_icache.c
>> @@ -1254,7 +1254,7 @@ struct xfs_inode *
>>         xfs_reclaim_work_queue(mp);
>>         xfs_ail_push_all(mp->m_ail);
>>
>> -       return xfs_reclaim_inodes_ag(mp, SYNC_TRYLOCK | SYNC_WAIT,
>> &nr_to_scan);
>> +       return 0; // xfs_reclaim_inodes_ag(mp, SYNC_TRYLOCK | SYNC_WAIT,
>> &nr_to_scan);
>>  }
>>
>
> So you've disabled inode reclaim completely...

I don't think this is correct. I disabled direct / kswapd reclaim.
XFS uses a background worker for async reclaim:

http://lxr.free-electrons.com/source/fs/xfs/xfs_icache.c#L178
http://lxr.free-electrons.com/source/fs/xfs/xfs_super.c#L1534

Confirmed by running trace-cmd on a patched kernel:

# trace-cmd record -p function -l xfs_reclaim_inodes -l xfs_reclaim_worker
# trace-cmd report
CPU 0 is empty
CPU 2 is empty
CPU 3 is empty
CPU 5 is empty
CPU 8 is empty
CPU 10 is empty
CPU 11 is empty
cpus=16
     kworker/12:2-31208 [012] 106450.590216: function: xfs_reclaim_worker
     kworker/12:2-31208 [012] 106450.590226: function: xfs_reclaim_inodes
     kworker/12:2-31208 [012] 106450.756879: function: xfs_reclaim_worker
     kworker/12:2-31208 [012] 106450.756882: function: xfs_reclaim_inodes
     kworker/12:2-31208 [012] 106450.920212: function: xfs_reclaim_worker
     kworker/12:2-31208 [012] 106450.920215: function: xfs_reclaim_inodes
     kworker/12:2-31208 [012] 106451.083549: function: xfs_reclaim_worker
     kworker/12:2-31208 [012] 106451.083552: function: xfs_reclaim_inodes
     kworker/12:2-31208 [012] 106451.246882: function: xfs_reclaim_worker
     kworker/12:2-31208 [012] 106451.246885: function: xfs_reclaim_inodes
     kworker/12:2-31208 [012] 106451.413546: function: xfs_reclaim_worker
     kworker/12:2-31208 [012] 106451.413548: function: xfs_reclaim_inodes
     kworker/12:2-31208 [012] 106451.580215: function: xfs_reclaim_worker
     kworker/12:2-31208 [012] 106451.580217: function: xfs_reclaim_inodes
     kworker/12:2-31208 [012] 106451.743549: function: xfs_reclaim_worker
     kworker/12:2-31208 [012] 106451.743550: function: xfs_reclaim_inodes
     kworker/12:2-31208 [012] 106451.906882: function: xfs_reclaim_worker
     kworker/12:2-31208 [012] 106451.906885: function: xfs_reclaim_inodes
     kworker/12:2-31208 [012] 106452.070216: function: xfs_reclaim_worker
     kworker/12:2-31208 [012] 106452.070219: function: xfs_reclaim_inodes
      kworker/7:0-14419 [007] 106454.730218: function: xfs_reclaim_worker
      kworker/7:0-14419 [007] 106454.730227: function: xfs_reclaim_inodes
      kworker/1:0-14025 [001] 106455.340221: function: xfs_reclaim_worker
      kworker/1:0-14025 [001] 106455.340225: function: xfs_reclaim_inodes

> The bz shows you have non-default vm settings such as
> 'vm.vfs_cache_pressure = 200.' My understanding is that prefers
> aggressive inode reclaim, yet the code workaround here is to bypass XFS
> inode reclaim. Out of curiousity, have you reproduced this problem using
> the default vfs_cache_pressure value (or if so, possibly moving it in
> the other direction)?

Yes, we've tried that, it had about 0 influence.

-- 
Alexander Polakov | system software engineer | https://beget.com

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [Bug 192981] New: page allocation stalls
  2017-02-15 16:52         ` Alexander Polakov
@ 2017-02-15 18:09           ` Brian Foster
  -1 siblings, 0 replies; 27+ messages in thread
From: Brian Foster @ 2017-02-15 18:09 UTC (permalink / raw)
  To: Alexander Polakov; +Cc: linux-mm, linux-xfs, bugzilla-daemon

On Wed, Feb 15, 2017 at 07:52:13PM +0300, Alexander Polakov wrote:
> On 02/15/2017 07:05 PM, Brian Foster wrote:
> > You're in inode reclaim, blocked on a memory allocation for an inode
> > buffer required to flush a dirty inode. I suppose this means that the
> > backing buffer for the inode has already been reclaimed and must be
> > re-read, which ideally wouldn't have occurred before the inode is
> > flushed.
> > 
> > > But it cannot get memory, because it's low (?). So it stays blocked.
> > > 
> > > Other processes do the same but they can't get past the mutex in
> > > xfs_reclaim_inodes_nr():
> > > 
> > ...
> > > Which finally leads to "Kernel panic - not syncing: Out of memory and no
> > > killable processes..." as no process is able to proceed.
> > > 
> > > I quickly hacked this:
> > > 
> > > diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c
> > > index 9ef152b..8adfb0a 100644
> > > --- a/fs/xfs/xfs_icache.c
> > > +++ b/fs/xfs/xfs_icache.c
> > > @@ -1254,7 +1254,7 @@ struct xfs_inode *
> > >         xfs_reclaim_work_queue(mp);
> > >         xfs_ail_push_all(mp->m_ail);
> > > 
> > > -       return xfs_reclaim_inodes_ag(mp, SYNC_TRYLOCK | SYNC_WAIT,
> > > &nr_to_scan);
> > > +       return 0; // xfs_reclaim_inodes_ag(mp, SYNC_TRYLOCK | SYNC_WAIT,
> > > &nr_to_scan);
> > >  }
> > > 
> > 
> > So you've disabled inode reclaim completely...
> 
> I don't think this is correct. I disabled direct / kswapd reclaim.
> XFS uses background worker for async reclaim:
> 
> http://lxr.free-electrons.com/source/fs/xfs/xfs_icache.c#L178
> http://lxr.free-electrons.com/source/fs/xfs/xfs_super.c#L1534
> 

Ah, Ok. It sounds like this allows the reclaim thread to carry on into
other shrinkers and free up memory that way, perhaps. This sounds kind
of similar to the issue brought up previously here[1], but not quite the
same in that instead of backing off of locking to allow other shrinkers
to progress, we back off of memory allocations required to free up
inodes (memory).

In theory, I think something analogous to a trylock for inode to buffer
mappings that are no longer cached (or more specifically, cannot
currently be allocated) may work around this, but it's not immediately
clear to me whether that's a proper fix (it's also probably not a
trivial change either). I'm still kind of curious why we end up with
dirty inodes with reclaimed buffers. If this problem repeats, is it
always with a similar stack (i.e., reclaim -> xfs_iflush() ->
xfs_imap_to_bp())?

How many independent filesystems are you running this workload against?
Can you describe the workload in more detail?

...
> > The bz shows you have non-default vm settings such as
> > 'vm.vfs_cache_pressure = 200.' My understanding is that prefers
> > aggressive inode reclaim, yet the code workaround here is to bypass XFS
> > inode reclaim. Out of curiousity, have you reproduced this problem using
> > the default vfs_cache_pressure value (or if so, possibly moving it in
> > the other direction)?
> 
> Yes, we've tried that, it had about 0 influence.
> 

Which... with what values? And by zero influence, do you simply mean the
stall still occurred, or do you have some other measurement (slab sizes
or some such) that was unaffected?

Brian

> -- 
> Alexander Polakov | system software engineer | https://beget.com

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [Bug 192981] New: page allocation stalls
  2017-02-15 18:09           ` Brian Foster
@ 2017-02-16 10:56             ` Alexander Polakov
  -1 siblings, 0 replies; 27+ messages in thread
From: Alexander Polakov @ 2017-02-16 10:56 UTC (permalink / raw)
  To: Brian Foster; +Cc: linux-mm, linux-xfs, bugzilla-daemon

On 02/15/2017 09:09 PM, Brian Foster wrote:
> Ah, Ok. It sounds like this allows the reclaim thread to carry on into
> other shrinkers and free up memory that way, perhaps. This sounds kind
> of similar to the issue brought up previously here[1], but not quite the
> same in that instead of backing off of locking to allow other shrinkers
> to progress, we back off of memory allocations required to free up
> inodes (memory).
>
> In theory, I think something analogous to a trylock for inode to buffer
> mappings that are no longer cached (or more specifically, cannot
> currently be allocated) may work around this, but it's not immediately
> clear to me whether that's a proper fix (it's also probably not a
> trivial change either). I'm still kind of curious why we end up with
> dirty inodes with reclaimed buffers. If this problem repeats, is it
> always with a similar stack (i.e., reclaim -> xfs_iflush() ->
> xfs_imap_to_bp())?

Looks like it is.

> How many independent filesystems are you running this workload against?

storage9 : ~ [0] # mount|grep storage|grep xfs|wc -l
15
storage9 : ~ [0] # mount|grep storage|grep ext4|wc -l
44

> Can you describe the workload in more detail?

This is a backup server; we're running rsync. At night our production
servers rsync their files to this server (a lot of small files).

> ...
>>> The bz shows you have non-default vm settings such as
>>> 'vm.vfs_cache_pressure = 200.' My understanding is that prefers
>>> aggressive inode reclaim, yet the code workaround here is to bypass XFS
>>> inode reclaim. Out of curiousity, have you reproduced this problem using
>>> the default vfs_cache_pressure value (or if so, possibly moving it in
>>> the other direction)?
>>
>> Yes, we've tried that, it had about 0 influence.
>>
>
> Which.. with what values? And by zero influence, do you simply mean the
> stall still occurred or you have some other measurement of slab sizes or
> some such that are unaffected?

Unfortunately I don't have slab statistics at hand. Stalls and the
following OOM situation still occurred with this setting at 100.
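
If it would help for the next run, something as simple as this could
capture the slab numbers over time (the interval, cache list and log
path are just examples):

  # log XFS-related slab usage once a minute
  while sleep 60; do
      date
      grep -E 'xfs_inode|xfs_ili|xfs_buf' /proc/slabinfo
  done >> /var/log/xfs-slab.log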

-- 
Alexander Polakov | system software engineer | https://beget.com

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [Bug 192981] New: page allocation stalls
  2017-02-16 10:56             ` Alexander Polakov
@ 2017-02-16 17:20               ` Brian Foster
  -1 siblings, 0 replies; 27+ messages in thread
From: Brian Foster @ 2017-02-16 17:20 UTC (permalink / raw)
  To: Alexander Polakov; +Cc: linux-mm, linux-xfs, bugzilla-daemon, Dave Chinner

On Thu, Feb 16, 2017 at 01:56:30PM +0300, Alexander Polakov wrote:
> On 02/15/2017 09:09 PM, Brian Foster wrote:
> > Ah, Ok. It sounds like this allows the reclaim thread to carry on into
> > other shrinkers and free up memory that way, perhaps. This sounds kind
> > of similar to the issue brought up previously here[1], but not quite the
> > same in that instead of backing off of locking to allow other shrinkers
> > to progress, we back off of memory allocations required to free up
> > inodes (memory).
> > 
> > In theory, I think something analogous to a trylock for inode to buffer
> > mappings that are no longer cached (or more specifically, cannot
> > currently be allocated) may work around this, but it's not immediately
> > clear to me whether that's a proper fix (it's also probably not a
> > trivial change either). I'm still kind of curious why we end up with
> > dirty inodes with reclaimed buffers. If this problem repeats, is it
> > always with a similar stack (i.e., reclaim -> xfs_iflush() ->
> > xfs_imap_to_bp())?
> 
> Looks like it is.
> 
> > How many independent filesystems are you running this workload against?
> 
> storage9 : ~ [0] # mount|grep storage|grep xfs|wc -l
> 15
> storage9 : ~ [0] # mount|grep storage|grep ext4|wc -l
> 44
> 

So a decent number of fs', more ext4 than XFS. Are the XFS fs' all of
similar size/geometry? If so, can you send representative xfs_info
output for the fs'?
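
Something along these lines should gather that in one go (the 'storage'
filter just mirrors how the mounts were counted above):

  for m in $(mount -t xfs | awk '{print $3}' | grep storage); do
      echo "=== $m ==="
      xfs_info "$m"
  done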

I'm reading back through that reclaim thread[1] and it appears this
indeed is not a straightforward issue. It sounds like the summary is
Chris hit the same general behavior you have and is helped by bypassing
the synchronous nature of the shrinker. This allows other shrinkers to
proceed, but this is not a general solution because other workloads
depend on the synchronous shrinker behavior to throttle direct reclaim.
I can't say I understand all of the details and architecture of how/why
that is the case.

FWIW, it sounds like the first order problem is that we generally don't
want to find/flush dirty inodes from reclaim. A couple of things that
might help avoid this situation are more aggressive
/proc/sys/fs/xfs/xfssyncd_centisecs tuning, or perhaps a smaller log
size, which would cause more tail-pushing pressure on the AIL instead of
pressure originating from memory reclaim. The latter might not be so
convenient if this is an already populated backup server, though.
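
As a rough illustration of the first knob (the default is 3000
centiseconds, i.e. 30s; Documentation/filesystems/xfs.txt describes the
exact semantics):

  # current background flush/push interval
  cat /proc/sys/fs/xfs/xfssyncd_centisecs

  # make it more aggressive, e.g. every 10 seconds
  echo 1000 > /proc/sys/fs/xfs/xfssyncd_centisecs

Whether that actually keeps reclaim from finding dirty inodes under this
workload would need testing, of course.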

Beyond that, there's Chris' patch, another patch that Dave proposed[2],
and obviously your hack here to defer inode reclaim entirely to the
workqueue (I've CC'd Dave since it sounds like he might have been
working on this further..). Unfortunately, it sounds like [1] or your
hack aren't things we can pull into mainline for the time being until
the broader reclaim queueing/throttling mechanism is in place. I suppose
we might be able to revisit [2] if it actually does enough to prevent
this problem...

[1] http://www.spinics.net/lists/linux-xfs/msg01541.html
[2] http://www.spinics.net/lists/linux-xfs/msg02212.html

Brian

> > Can you describe the workload in more detail?
> 
> This is a backup server, we're running rsync. At night our production
> servers rsync their files to this server (a lot of small files).
> 
> > ...
> > > > The bz shows you have non-default vm settings such as
> > > > 'vm.vfs_cache_pressure = 200.' My understanding is that prefers
> > > > aggressive inode reclaim, yet the code workaround here is to bypass XFS
> > > > inode reclaim. Out of curiousity, have you reproduced this problem using
> > > > the default vfs_cache_pressure value (or if so, possibly moving it in
> > > > the other direction)?
> > > 
> > > Yes, we've tried that, it had about 0 influence.
> > > 
> > 
> > Which.. with what values? And by zero influence, do you simply mean the
> > stall still occurred or you have some other measurement of slab sizes or
> > some such that are unaffected?
> 
> Unfortunately I don't have slab statistics at hand. Stalls and following OOM
> situation still occured with this setting at 100.
> 
> -- 
> Alexander Polakov | system software engineer | https://beget.com

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [Bug 192981] New: page allocation stalls
  2017-02-16 17:20               ` Brian Foster
@ 2017-02-16 22:21                 ` Dave Chinner
  -1 siblings, 0 replies; 27+ messages in thread
From: Dave Chinner @ 2017-02-16 22:21 UTC (permalink / raw)
  To: Brian Foster; +Cc: Alexander Polakov, linux-mm, linux-xfs, bugzilla-daemon

On Thu, Feb 16, 2017 at 12:20:34PM -0500, Brian Foster wrote:
> On Thu, Feb 16, 2017 at 01:56:30PM +0300, Alexander Polakov wrote:
> > On 02/15/2017 09:09 PM, Brian Foster wrote:
> > > Ah, Ok. It sounds like this allows the reclaim thread to carry on into
> > > other shrinkers and free up memory that way, perhaps. This sounds kind
> > > of similar to the issue brought up previously here[1], but not quite the
> > > same in that instead of backing off of locking to allow other shrinkers
> > > to progress, we back off of memory allocations required to free up
> > > inodes (memory).
> > > 
> > > In theory, I think something analogous to a trylock for inode to buffer
> > > mappings that are no longer cached (or more specifically, cannot
> > > currently be allocated) may work around this, but it's not immediately
> > > clear to me whether that's a proper fix (it's also probably not a
> > > trivial change either). I'm still kind of curious why we end up with
> > > dirty inodes with reclaimed buffers. If this problem repeats, is it
> > > always with a similar stack (i.e., reclaim -> xfs_iflush() ->
> > > xfs_imap_to_bp())?
> > 
> > Looks like it is.
> > 
> > > How many independent filesystems are you running this workload against?
> > 
> > storage9 : ~ [0] # mount|grep storage|grep xfs|wc -l
> > 15
> > storage9 : ~ [0] # mount|grep storage|grep ext4|wc -l
> > 44
> > 
> 
> So a decent number of fs', more ext4 than XFS. Are the XFS fs' all of
> similar size/geometry? If so, can you send representative xfs_info
> output for the fs'?
> 
> I'm reading back through that reclaim thread[1] and it appears this
> indeed is not a straightforward issue. It sounds like the summary is
> Chris hit the same general behavior you have and is helped by bypassing
> the synchronous nature of the shrinker. This allows other shrinkers to
> proceed, but this is not a general solution because other workloads
> depend on the synchronous shrinker behavior to throttle direct reclaim.
> I can't say I understand all of the details and architecture of how/why
> that is the case.

It's complicated, made worse by the state of flux of the mm reclaim
subsystem and the frequent regressions in behaviour that come and
go. This makes testing modifications to the shrinker behaviour
extremely challenging - trying to separate shrinker artifacts from
"something else has changed in memory reclaim" takes a lot of
time....

> FWIW, it sounds like the first order problem is that we generally don't
> want to find/flush dirty inodes from reclaim.

Right, because that forces out-of-order inode writeback and it
degenerates into blocking small random writes.

> A couple things that might
> help avoid this situation are more aggressive
> /proc/sys/fs/xfs/xfssyncd_centisecs tuning or perhaps considering a
> smaller log size would cause more tail pushing pressure on the AIL
> instead of pressure originating from memory reclaim. The latter might
> not be so convenient if this is an already populated backup server,
> though.
> 
> Beyond that, there's Chris' patch, another patch that Dave proposed[2],
> and obviously your hack here to defer inode reclaim entirely to the
> workqueue (I've CC'd Dave since it sounds like he might have been
> working on this further..).

I was working on a more solid set of changes, but every time I
updated the kernel tree I used as my base for development, the
baseline kernel reclaim behaviour would change. I'd isolate the
behavioural change, upgrade to the kernel that contained the fix,
and then trip over some new whacky behaviour that made no sense. I
spent more time in this loop than actually trying to fix the XFS
problem - chasing a moving target makes finding the root cause of
the reclaim stalls just about impossible. 

Brian, I can send you what I have but it's really just a bag of
bolts at this point because I was never able to validate that any of
the patches made a measurable improvement to reclaim behaviour under
any workload I ran.....

FWIW, the major problem with removing the blocking in inode reclaim
is the ease with which you can then trigger the OOM killer from
userspace.  The high level memory reclaim algorithms break down when
there are hundreds of direct reclaim processes hammering on reclaim
and reclaim stops making progress because it's skipping dirty
objects.  Direct reclaim ends up insufficiently throttled, so rather
than blocking, it winds up the reclaim priority and then declares OOM
because reclaim runs out of retries before sufficient memory has
been freed.

That, right now, looks to be an unsolvable problem without a major
rework of direct reclaim.  I've pretty much given up on ever getting
the unbound direct reclaim concurrency problem that is causing us
these problems fixed, so we are left to handle it in the subsystem
shrinkers as best we can. That leaves us with an unfortunate choice: 

	a) throttle excessive concurrency in the shrinker to prevent
	   IO breakdown, thereby causing reclaim latency bubbles
	   under load but having a stable, reliable system; or
	b) optimise for minimal reclaim latency and risk userspace
	   memory demand triggering the OOM killer whenever there
	   are lots of dirty inodes in the system.

Quite frankly, there's only one choice we can make in this
situation: reliability is always more important than performance.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [Bug 192981] New: page allocation stalls
@ 2017-02-16 22:21                 ` Dave Chinner
  0 siblings, 0 replies; 27+ messages in thread
From: Dave Chinner @ 2017-02-16 22:21 UTC (permalink / raw)
  To: Brian Foster; +Cc: Alexander Polakov, linux-mm, linux-xfs, bugzilla-daemon

On Thu, Feb 16, 2017 at 12:20:34PM -0500, Brian Foster wrote:
> On Thu, Feb 16, 2017 at 01:56:30PM +0300, Alexander Polakov wrote:
> > On 02/15/2017 09:09 PM, Brian Foster wrote:
> > > Ah, Ok. It sounds like this allows the reclaim thread to carry on into
> > > other shrinkers and free up memory that way, perhaps. This sounds kind
> > > of similar to the issue brought up previously here[1], but not quite the
> > > same in that instead of backing off of locking to allow other shrinkers
> > > to progress, we back off of memory allocations required to free up
> > > inodes (memory).
> > > 
> > > In theory, I think something analogous to a trylock for inode to buffer
> > > mappings that are no longer cached (or more specifically, cannot
> > > currently be allocated) may work around this, but it's not immediately
> > > clear to me whether that's a proper fix (it's also probably not a
> > > trivial change either). I'm still kind of curious why we end up with
> > > dirty inodes with reclaimed buffers. If this problem repeats, is it
> > > always with a similar stack (i.e., reclaim -> xfs_iflush() ->
> > > xfs_imap_to_bp())?
> > 
> > Looks like it is.
> > 
> > > How many independent filesystems are you running this workload against?
> > 
> > storage9 : ~ [0] # mount|grep storage|grep xfs|wc -l
> > 15
> > storage9 : ~ [0] # mount|grep storage|grep ext4|wc -l
> > 44
> > 
> 
> So a decent number of fs', more ext4 than XFS. Are the XFS fs' all of
> similar size/geometry? If so, can you send representative xfs_info
> output for the fs'?
> 
> I'm reading back through that reclaim thread[1] and it appears this
> indeed is not a straightforward issue. It sounds like the summary is
> Chris hit the same general behavior you have and is helped by bypassing
> the synchronous nature of the shrinker. This allows other shrinkers to
> proceed, but this is not a general solution because other workloads
> depend on the synchronous shrinker behavior to throttle direct reclaim.
> I can't say I understand all of the details and architecture of how/why
> that is the case.

It's complicated, made worse by the state of flux of the mm reclaim
subsystem and the frequent regressions in behaviour that come and
go. This makes testing modifications to the shrinker behaviour
extremely challenging - trying to separate shirnker artifacts from
"something else has changed in memory reclaim" takes a lot of
time....

> FWIW, it sounds like the first order problem is that we generally don't
> want to find/flush dirty inodes from reclaim.

Right. because that forces out-of-order inode writeback and it
degenerates into blocking small random writes.

> A couple things that might
> help avoid this situation are more aggressive
> /proc/sys/fs/xfs/xfssyncd_centisecs tuning or perhaps considering a
> smaller log size would cause more tail pushing pressure on the AIL
> instead of pressure originating from memory reclaim. The latter might
> not be so convenient if this is an already populated backup server,
> though.
> 
> Beyond that, there's Chris' patch, another patch that Dave proposed[2],
> and obviously your hack here to defer inode reclaim entirely to the
> workqueue (I've CC'd Dave since it sounds like he might have been
> working on this further..).

I was working on a more solid set of changes, but every time I
updated the kernel tree I used as my base for development, the
baseline kernel reclaim behaviour would change. I'd isolate the
behavioural change, upgrade to the kernel that contained the fix,
and then trip over some new whacky behaviour that made no sense. I
spent more time in this loop than actually trying to fix the XFS
problem - chasing a moving target makes finding the root cause of
the reclaim stalls just about impossible. 

Brian, I can send you what I have but it's really just a bag of
bolts at this point because I was never able to validate that any of
the patches made a measurable improvement to reclaim behaviour under
any workload I ran.....

FWIW, the major problem with removing the blocking in inode reclaim
is the ease with which you can then trigger the OOM killer from
userspace.  The high level memory reclaim algorithms break down when
there are hundreds of direct reclaim processes hammering on reclaim
and reclaim stops making progress because it's skipping dirty
objects.  Direct reclaim ends up insufficiently throttled, so rather
than blocking it winds up reclaim priority and then declares OOM
because reclaim runs out of retries before sufficient memory has
been freed.

That, right now, looks to be an unsolvable problem without a major
rework of direct reclaim.  I've pretty much given up on ever getting
the unbound direct reclaim concurrency problem that is causing us
these problems fixed, so we are left to handle it in the subsystem
shrinkers as best we can. That leaves us with an unfortunate choice: 

	a) throttle excessive concurrency in the shrinker to prevent
	   IO breakdown, thereby causing reclaim latency bubbles
	   under load but having a stable, reliable system; or
	b) optimise for minimal reclaim latency and risk userspace
	   memory demand triggering the OOM killer whenever there
	   are lots of dirty inodes in the system.

Quite frankly, there's only one choice we can make in this
situation: reliability is always more important than performance.
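
To make option (a) concrete, here is a minimal, purely illustrative
sketch of a shrinker that serialises its scanners so only one of them is
doing the blocking writeback at a time. The structure, the helpers and
the mutex below are invented for the example - this is not the real XFS
shrinker code:

#include <linux/kernel.h>
#include <linux/mutex.h>
#include <linux/shrinker.h>

/* Hypothetical per-fs state - not the real XFS structures. */
struct demo_fs {
        struct shrinker shrinker;       /* registered at mount time */
        struct mutex    reclaim_lock;   /* serialises reclaimers */
};

/* Stand-in for the blocking inode flush + reclaim path. */
static unsigned long demo_reclaim_inodes(struct demo_fs *fs,
                                         unsigned long nr_to_scan)
{
        return 0;
}

static unsigned long demo_count(struct shrinker *shrink,
                                struct shrink_control *sc)
{
        /* would return the number of reclaimable inodes */
        return 0;
}

static unsigned long demo_scan(struct shrinker *shrink,
                               struct shrink_control *sc)
{
        struct demo_fs *fs = container_of(shrink, struct demo_fs, shrinker);
        unsigned long freed;

        /*
         * Option (a): direct reclaimers queue on the mutex instead of
         * all issuing IO at once.  This is where the reclaim latency
         * bubbles come from, but the IO pattern (and the system) stays
         * sane.
         */
        mutex_lock(&fs->reclaim_lock);
        freed = demo_reclaim_inodes(fs, sc->nr_to_scan);
        mutex_unlock(&fs->reclaim_lock);

        return freed;
}

The real shrinker obviously does far more than this, but option (a)
boils down to "one reclaimer at a time does the IO, everyone else
waits".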

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [Bug 192981] New: page allocation stalls
  2017-02-16 22:21                 ` Dave Chinner
@ 2017-02-17 11:11                   ` Tetsuo Handa
  -1 siblings, 0 replies; 27+ messages in thread
From: Tetsuo Handa @ 2017-02-17 11:11 UTC (permalink / raw)
  To: Dave Chinner, Brian Foster
  Cc: Alexander Polakov, linux-mm, linux-xfs, bugzilla-daemon

On 2017/02/17 7:21, Dave Chinner wrote:
> FWIW, the major problem with removing the blocking in inode reclaim
> is the ease with which you can then trigger the OOM killer from
> userspace.  The high level memory reclaim algorithms break down when
> there are hundreds of direct reclaim processes hammering on reclaim
> and reclaim stops making progress because it's skipping dirty
> objects.  Direct reclaim ends up insufficiently throttled, so rather
> than blocking it winds up reclaim priority and then declares OOM
> because reclaim runs out of retries before sufficient memory has
> been freed.
> 
> That, right now, looks to be an unsolvable problem without a major
> rework of direct reclaim.  I've pretty much given up on ever getting
> the unbound direct reclaim concurrency problem that is causing us
> these problems fixed, so we are left to handle it in the subsystem
> shrinkers as best we can. That leaves us with an unfortunate choice: 
> 
> 	a) throttle excessive concurrency in the shrinker to prevent
> 	   IO breakdown, thereby causing reclaim latency bubbles
> 	   under load but having a stable, reliable system; or
> 	b) optimise for minimal reclaim latency and risk userspace
> 	   memory demand triggering the OOM killer whenever there
> 	   are lots of dirty inodes in the system.
> 
> Quite frankly, there's only one choice we can make in this
> situation: reliability is always more important than performance.

Is it possible to get rid of direct reclaim and let the allocating threads
wait on a queue? I wished for such a change in the context of __GFP_KILLABLE at
http://lkml.kernel.org/r/201702012049.BAG95379.VJFFOHMStLQFOO@I-love.SAKURA.ne.jp .
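
To illustrate, a toy sketch of what I mean - every name below is a
placeholder rather than an existing kernel interface. Allocating threads
would sleep (killably, for the __GFP_KILLABLE case) on a queue while a
dedicated worker does all of the reclaim and wakes them:

#include <linux/jiffies.h>
#include <linux/kthread.h>
#include <linux/sched.h>
#include <linux/wait.h>

/* One queue that all would-be direct reclaimers wait on (hypothetical). */
static DECLARE_WAIT_QUEUE_HEAD(alloc_waitq);

static bool enough_memory_free(void)
{
        return false;   /* placeholder: would check zone watermarks */
}

static void background_reclaim_pass(void)
{
        /* placeholder: would shrink the LRUs and run the shrinkers */
}

/*
 * Allocation slowpath: no direct reclaim at all, just a killable wait.
 * Returns -ERESTARTSYS if a fatal signal arrives (the __GFP_KILLABLE case).
 */
static int wait_for_memory(void)
{
        return wait_event_killable(alloc_waitq, enough_memory_free());
}

/* A kswapd-like worker owns all of the actual reclaim work. */
static int reclaimd(void *unused)
{
        while (!kthread_should_stop()) {
                background_reclaim_pass();
                wake_up_all(&alloc_waitq);
                schedule_timeout_interruptible(HZ / 10);
        }
        return 0;
}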



^ permalink raw reply	[flat|nested] 27+ messages in thread


* Re: [Bug 192981] New: page allocation stalls
  2017-02-16 22:21                 ` Dave Chinner
@ 2017-02-17 19:05                   ` Brian Foster
  -1 siblings, 0 replies; 27+ messages in thread
From: Brian Foster @ 2017-02-17 19:05 UTC (permalink / raw)
  To: Dave Chinner; +Cc: Alexander Polakov, linux-mm, linux-xfs, bugzilla-daemon

On Fri, Feb 17, 2017 at 09:21:29AM +1100, Dave Chinner wrote:
> On Thu, Feb 16, 2017 at 12:20:34PM -0500, Brian Foster wrote:
> > On Thu, Feb 16, 2017 at 01:56:30PM +0300, Alexander Polakov wrote:
> > > On 02/15/2017 09:09 PM, Brian Foster wrote:
> > > > Ah, Ok. It sounds like this allows the reclaim thread to carry on into
> > > > other shrinkers and free up memory that way, perhaps. This sounds kind
> > > > of similar to the issue brought up previously here[1], but not quite the
> > > > same in that instead of backing off of locking to allow other shrinkers
> > > > to progress, we back off of memory allocations required to free up
> > > > inodes (memory).
> > > > 
> > > > In theory, I think something analogous to a trylock for inode to buffer
> > > > mappings that are no longer cached (or more specifically, cannot
> > > > currently be allocated) may work around this, but it's not immediately
> > > > clear to me whether that's a proper fix (it's also probably not a
> > > > trivial change either). I'm still kind of curious why we end up with
> > > > dirty inodes with reclaimed buffers. If this problem repeats, is it
> > > > always with a similar stack (i.e., reclaim -> xfs_iflush() ->
> > > > xfs_imap_to_bp())?
> > > 
> > > Looks like it is.
> > > 
> > > > How many independent filesystems are you running this workload against?
> > > 
> > > storage9 : ~ [0] # mount|grep storage|grep xfs|wc -l
> > > 15
> > > storage9 : ~ [0] # mount|grep storage|grep ext4|wc -l
> > > 44
> > > 
> > 
> > So a decent number of fs', more ext4 than XFS. Are the XFS fs' all of
> > similar size/geometry? If so, can you send representative xfs_info
> > output for the fs'?
> > 
> > I'm reading back through that reclaim thread[1] and it appears this
> > indeed is not a straightforward issue. It sounds like the summary is
> > Chris hit the same general behavior you have and is helped by bypassing
> > the synchronous nature of the shrinker. This allows other shrinkers to
> > proceed, but this is not a general solution because other workloads
> > depend on the synchronous shrinker behavior to throttle direct reclaim.
> > I can't say I understand all of the details and architecture of how/why
> > that is the case.
> 
> It's complicated, made worse by the state of flux of the mm reclaim
> subsystem and the frequent regressions in behaviour that come and
> go. This makes testing modifications to the shrinker behaviour
> extremely challenging - trying to separate shirnker artifacts from
> "something else has changed in memory reclaim" takes a lot of
> time....
> 
> > FWIW, it sounds like the first order problem is that we generally don't
> > want to find/flush dirty inodes from reclaim.
> 
> Right. because that forces out-of-order inode writeback and it
> degenerates into blocking small random writes.
> 
> > A couple things that might
> > help avoid this situation are more aggressive
> > /proc/sys/fs/xfs/xfssyncd_centisecs tuning or perhaps considering a
> > smaller log size would cause more tail pushing pressure on the AIL
> > instead of pressure originating from memory reclaim. The latter might
> > not be so convenient if this is an already populated backup server,
> > though.
> > 
> > Beyond that, there's Chris' patch, another patch that Dave proposed[2],
> > and obviously your hack here to defer inode reclaim entirely to the
> > workqueue (I've CC'd Dave since it sounds like he might have been
> > working on this further..).
> 
> I was working on a more solid set of changes, but every time I
> updated the kernel tree I used as my base for development, the
> baseline kernel reclaim behaviour would change. I'd isolate the
> behavioural change, upgrade to the kernel that contained the fix,
> and then trip over some new whacky behaviour that made no sense. I
> spent more time in this loop than actually trying to fix the XFS
> problem - chasing a moving target makes finding the root cause of
> the reclaim stalls just about impossible. 
> 
> Brian, I can send you what I have but it's really just a bag of
> bolts at this point because I was never able to validate that any of
> the patches made a measurable improvement to reclaim behaviour under
> any workload I ran.....
> 

Sure, I'm curious to see what direction this goes in. I would think
anything that provides a backoff to other shrinkers would help this
particular workload where many different filesystems are active. FWIW,
I'd probably also need more details about what workloads you're testing
and how you're measuring improvements and whatnot to try and take any of
that stuff any farther (particularly how you verify the problems with
dropping blocking behavior entirely), though..

> FWIW, the major problem with removing the blocking in inode reclaim
> is the ease with which you can then trigger the OOM killer from
> userspace.  The high level memory reclaim algorithms break down when
> there are hundreds of direct reclaim processes hammering on reclaim
> and reclaim stops making progress because it's skipping dirty
> objects.  Direct reclaim ends up insufficiently throttled, so rather
> than blocking it winds up reclaim priority and then declares OOM
> because reclaim runs out of retries before sufficient memory has
> been freed.
> 

I'd need to spend some time in the shrinker code to grok this, but if
there's such a priority, would switching blocking behavior based on
priority provide a way to mitigate this problem from within the
shrinker? For example, provide non-blocking behavior on the lowest
priority to kick off flushing and allow progress into other shrinkers,
otherwise we flush and wait if the priority is elevated..?
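
Something along these lines is what I'm picturing, purely as a sketch -
as I note below, shrink_control doesn't actually carry a priority today,
so sc->priority and both helpers here are assumptions:

#include <linux/mmzone.h>
#include <linux/shrinker.h>

static void demo_flush_async(void)
{
        /* placeholder: kick off async inode writeback, don't wait */
}

static unsigned long demo_reclaim_blocking(unsigned long nr_to_scan)
{
        /* placeholder: the current flush-and-wait reclaim path */
        return 0;
}

static unsigned long demo_scan_objects(struct shrinker *shrink,
                                       struct shrink_control *sc)
{
        /*
         * Reclaim priority starts at DEF_PRIORITY (12) and counts down
         * towards 0 as reclaim gets more desperate, so DEF_PRIORITY is
         * the "lowest", least urgent pass.
         */
        if (sc->priority == DEF_PRIORITY) {
                /*
                 * First pass: start IO and back off so reclaim can move
                 * on to the other shrinkers instead of stalling here.
                 */
                demo_flush_async();
                return SHRINK_STOP;
        }

        /*
         * Reclaim is struggling: fall back to blocking behaviour so
         * direct reclaim is throttled against our IO completion.
         */
        return demo_reclaim_blocking(sc->nr_to_scan);
}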

IOW, it sounds like the problem in this case is that we subject the rest
of the allocation infrastructure to delays in configurations where we
are one of N potential shrinkers with reclaimable objects, because we
have to deal with this situation where our one shrinker actually is the
main/primary choke point for multiple allocator -> direct reclaimers.
I'm wondering if some kind of severity parameter managed by the shrinker
infra would help us distinguish between those scenarios (even if it were
a dumb LOW/HIGH priority param, where LOW allows for one pass through
all of the shrinkers to kick off I/O and whatnot before any one of them
should actually block on locks or I/O). Then again, I'm just handwaving
as I'm only just familiarizing myself with the context and problem.

(I also see no priority in struct shrink_control, so I guess that's an
internal reclaim thing as it is.)

> That, right now, looks to be an unsolvable problem without a major
> rework of direct reclaim.  I've pretty much given up on ever getting
> the unbound direct reclaim concurrency problem that is causing us
> these problems fixed, so we are left to handle it in the subsystem
> shrinkers as best we can. That leaves us with an unfortunate choice: 
> 
> 	a) throttle excessive concurrency in the shrinker to prevent
> 	   IO breakdown, thereby causing reclaim latency bubbles
> 	   under load but having a stable, reliable system; or
> 	b) optimise for minimal reclaim latency and risk userspace
> 	   memory demand triggering the OOM killer whenever there
> 	   are lots of dirty inodes in the system.
> 
> Quite frankly, there's only one choice we can make in this
> situation: reliability is always more important than performance.
> 

Indeed, that certainly makes sense. Thanks.

Brian

> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@fromorbit.com

^ permalink raw reply	[flat|nested] 27+ messages in thread


* Re: [Bug 192981] New: page allocation stalls
  2017-02-17 19:05                   ` Brian Foster
@ 2017-02-17 23:52                     ` Dave Chinner
  -1 siblings, 0 replies; 27+ messages in thread
From: Dave Chinner @ 2017-02-17 23:52 UTC (permalink / raw)
  To: Brian Foster; +Cc: Alexander Polakov, linux-mm, linux-xfs, bugzilla-daemon

On Fri, Feb 17, 2017 at 02:05:00PM -0500, Brian Foster wrote:
> On Fri, Feb 17, 2017 at 09:21:29AM +1100, Dave Chinner wrote:
> > On Thu, Feb 16, 2017 at 12:20:34PM -0500, Brian Foster wrote:
> > > A couple things that might
> > > help avoid this situation are more aggressive
> > > /proc/sys/fs/xfs/xfssyncd_centisecs tuning or perhaps considering a
> > > smaller log size would cause more tail pushing pressure on the AIL
> > > instead of pressure originating from memory reclaim. The latter might
> > > not be so convenient if this is an already populated backup server,
> > > though.
> > > 
> > > Beyond that, there's Chris' patch, another patch that Dave proposed[2],
> > > and obviously your hack here to defer inode reclaim entirely to the
> > > workqueue (I've CC'd Dave since it sounds like he might have been
> > > working on this further..).
> > 
> > I was working on a more solid set of changes, but every time I
> > updated the kernel tree I used as my base for development, the
> > baseline kernel reclaim behaviour would change. I'd isolate the
> > behavioural change, upgrade to the kernel that contained the fix,
> > and then trip over some new whacky behaviour that made no sense. I
> > spent more time in this loop than actually trying to fix the XFS
> > problem - chasing a moving target makes finding the root cause of
> > the reclaim stalls just about impossible. 
> > 
> > Brian, I can send you what I have but it's really just a bag of
> > bolts at this point because I was never able to validate that any of
> > the patches made a measurable improvement to reclaim behaviour under
> > any workload I ran.....
> > 
> 
> Sure, I'm curious to see what direction this goes in. I would think
> anything that provides a backoff to other shrinkers would help this
> particular workload where many different filesystems are active. FWIW,
> I'd probably also need more details about what workloads you're testing
> and how you're measuring improvements and whatnot to try and take any of
> that stuff any farther (particularly how you verify the problems with
> dropping blocking behavior entirely), though..

Ok, I'll send you a copy...

> > FWIW, the major problem with removing the blocking in inode reclaim
> > is the ease with which you can then trigger the OOM killer from
> > userspace.  The high level memory reclaim algorithms break down when
> > there are hundreds of direct reclaim processes hammering on reclaim
> > and reclaim stops making progress because it's skipping dirty
> > objects.  Direct reclaim ends up insufficiently throttled, so rather
> > than blocking it winds up reclaim priority and then declares OOM
> > because reclaim runs out of retries before sufficient memory has
> > been freed.
> > 
> 
> I'd need to spend some time in the shrinker code to grok this, but if
> there's such a priority, would switching blocking behavior based on
> priority provide a way to mitigate this problem from within the
> shrinker? For example, provide non-blocking behavior on the lowest
> priority to kick off flushing and allow progress into other shrinkers,
> otherwise we flush and wait if the priority is elevated..?

I tried that - exporting the priority to the shrink_control and so
on. The system either behaved the same (i.e. stalled on reclaim) or
randomly fell into a screaming pile of OOM killer rage-death. There
was no in-between reliable state...

> IOW, it sounds like the problem in this case is that we subject the rest
> of the allocation infrastructure to delays in configurations where we
> are one of N potential shrinkers with reclaimable objects, because we
> have to deal with this situation where our one shrinker actually is the
> main/primary choke point for multiple allocator -> direct reclaimers.

Yup. The mm reclaim design sucks ass because if we use purely
non-blocking reclaim techniques in the shrinkers we have no way of
throttling allocation demand to the rate at which shrinkers can
reclaim objects and we end up with gross cache imbalances under
memory pressure.

> I'm wondering if some kind of severity parameter managed by the shrinker
> infra would help us distinguish between those scenarios (even if it were
> a dumb LOW/HIGH priority param, where LOW allows for one pass through
> all of the shrinkers to kick off I/O and whatnot before any one of them
> should actually block on locks or I/O). Then again, I'm just handwaving
> as I'm only just familiarizing with the context and problem.

Tried that, too. :( Several different ways over the past few years.
Remember that we also have the non-blocking background reclaim
thread - even triggering that immediately on reclaim doesn't
prevent stalls, and my last patchset specifically excluded it
from concurrency control so it always ran immediately on reclaim

Hmmm - I just had a thought.

Perhaps all we need to do is remove direct reclaim from the
shrinker. i.e. make the background reclaimer run on a ticket based
queuing system similar to the log reservations...

For non-blocking reclaim (i.e.  kswapd), we just queue the ticket
and return immediately, saying we've done the scan. 

For blocking reclaim, we take a ticket with the blocking scan count
on it and wait in the wakeup queue.  As background reclaim runs, it
decrements the lead ticket scan count. When it hits zero, wake up
the head of the wait queue. When the background thread has no
pending reclaim tickets or has nothing left to scan/reclaim (i.e.
goes idle), wake everyone....

This means waiting is ordered, it doesn't matter where in the fs we
reclaim from, direct reclaim itself doesn't get stuck waiting for
IO/transactions/log flushing, concurrency is managed by how we
process the ticket queue, and when we have lots of direct reclaim
demand then the background thread just keeps running as optimally as
possible and kswapd is never blocked...
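
To make that slightly more concrete, a rough sketch of the ticket
machinery the idea implies - all of the names are invented for
illustration, and the background reclaim thread that would call the
credit/idle helpers is assumed rather than shown:

#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/wait.h>

struct reclaim_ticket {
        struct list_head        list;
        unsigned long           scan_remaining; /* work owed to this waiter */
        bool                    done;
};

static LIST_HEAD(ticket_queue);
static DEFINE_SPINLOCK(ticket_lock);
static DECLARE_WAIT_QUEUE_HEAD(ticket_waitq);

/*
 * Non-blocking callers (kswapd) would simply hand their scan count to
 * the background reclaim thread and return "scan done" immediately;
 * only the blocking path below queues a ticket and waits.
 */
static void reclaim_wait_sync(unsigned long nr_to_scan)
{
        struct reclaim_ticket ticket = {
                .scan_remaining = nr_to_scan,
                .done           = false,
        };

        spin_lock(&ticket_lock);
        list_add_tail(&ticket.list, &ticket_queue);
        spin_unlock(&ticket_lock);

        /* woken when our ticket is paid off, or the worker goes idle */
        wait_event(ticket_waitq, ticket.done);
}

/* Background reclaim thread calls this after freeing 'nr' objects. */
static void reclaim_ticket_credit(unsigned long nr)
{
        struct reclaim_ticket *t;

        spin_lock(&ticket_lock);
        t = list_first_entry_or_null(&ticket_queue,
                                     struct reclaim_ticket, list);
        if (t) {
                t->scan_remaining -= min(nr, t->scan_remaining);
                if (!t->scan_remaining) {
                        list_del(&t->list);
                        t->done = true;
                        wake_up(&ticket_waitq); /* release the head waiter */
                }
        }
        spin_unlock(&ticket_lock);
}

/* Background reclaim thread has nothing left to scan: release everyone. */
static void reclaim_ticket_idle(void)
{
        struct reclaim_ticket *t, *n;

        spin_lock(&ticket_lock);
        list_for_each_entry_safe(t, n, &ticket_queue, list) {
                list_del(&t->list);
                t->done = true;
        }
        spin_unlock(&ticket_lock);
        wake_up_all(&ticket_waitq);
}

The important property is that direct reclaimers never touch the
filesystem themselves - they only wait until the background thread has
done an amount of work proportional to their ticket's scan count, or
until it runs out of work and releases everyone.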

> (I also see no priority in struct shrink_control, so I guess that's an
> internal reclaim thing as it is.)

Yup, I had to promote it all the way through from the struct
scan_control -> priority field.

Cheers,

Dave.

-- 
Dave Chinner
david@fromorbit.com

^ permalink raw reply	[flat|nested] 27+ messages in thread


* Re: [Bug 192981] New: page allocation stalls
  2017-02-17 11:11                   ` Tetsuo Handa
@ 2017-02-17 23:58                     ` Dave Chinner
  -1 siblings, 0 replies; 27+ messages in thread
From: Dave Chinner @ 2017-02-17 23:58 UTC (permalink / raw)
  To: Tetsuo Handa
  Cc: Brian Foster, Alexander Polakov, linux-mm, linux-xfs, bugzilla-daemon

On Fri, Feb 17, 2017 at 08:11:09PM +0900, Tetsuo Handa wrote:
> On 2017/02/17 7:21, Dave Chinner wrote:
> > FWIW, the major problem with removing the blocking in inode reclaim
> > is the ease with which you can then trigger the OOM killer from
> > userspace.  The high level memory reclaim algorithms break down when
> > there are hundreds of direct reclaim processes hammering on reclaim
> > and reclaim stops making progress because it's skipping dirty
> > objects.  Direct reclaim ends up insufficiently throttled, so rather
> > than blocking it winds up reclaim priority and then declares OOM
> > because reclaim runs out of retries before sufficient memory has
> > been freed.
> > 
> > That, right now, looks to be an unsolvable problem without a major
> > rework of direct reclaim.  I've pretty much given up on ever getting
> > the unbound direct reclaim concurrency problem that is causing us
> > these problems fixed, so we are left to handle it in the subsystem
> > shrinkers as best we can. That leaves us with an unfortunate choice: 
> > 
> > 	a) throttle excessive concurrency in the shrinker to prevent
> > 	   IO breakdown, thereby causing reclaim latency bubbles
> > 	   under load but having a stable, reliable system; or
> > 	b) optimise for minimal reclaim latency and risk userspace
> > 	   memory demand triggering the OOM killer whenever there
> > 	   are lots of dirty inodes in the system.
> > 
> > Quite frankly, there's only one choice we can make in this
> > situation: reliability is always more important than performance.
> 
> Is it possible to get rid of direct reclaim and let allocating thread
> wait on queue? I wished such change in context of __GFP_KILLABLE at
> http://lkml.kernel.org/r/201702012049.BAG95379.VJFFOHMStLQFOO@I-love.SAKURA.ne.jp .

Yup, that's similar to what I've been suggesting - offloading the
direct reclaim slowpath to a limited set of kswapd-like workers
and blocking the allocating processes until there is either memory
for them or OOM is declared...

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

^ permalink raw reply	[flat|nested] 27+ messages in thread


* Re: [Bug 192981] New: page allocation stalls
  2017-02-17 23:52                     ` Dave Chinner
@ 2017-02-18 13:05                       ` Brian Foster
  -1 siblings, 0 replies; 27+ messages in thread
From: Brian Foster @ 2017-02-18 13:05 UTC (permalink / raw)
  To: Dave Chinner; +Cc: Alexander Polakov, linux-mm, linux-xfs, bugzilla-daemon

On Sat, Feb 18, 2017 at 10:52:45AM +1100, Dave Chinner wrote:
> On Fri, Feb 17, 2017 at 02:05:00PM -0500, Brian Foster wrote:
> > On Fri, Feb 17, 2017 at 09:21:29AM +1100, Dave Chinner wrote:
> > > On Thu, Feb 16, 2017 at 12:20:34PM -0500, Brian Foster wrote:
> > > > A couple things that might
> > > > help avoid this situation are more aggressive
> > > > /proc/sys/fs/xfs/xfssyncd_centisecs tuning or perhaps considering a
> > > > smaller log size would cause more tail pushing pressure on the AIL
> > > > instead of pressure originating from memory reclaim. The latter might
> > > > not be so convenient if this is an already populated backup server,
> > > > though.
> > > > 
> > > > Beyond that, there's Chris' patch, another patch that Dave proposed[2],
> > > > and obviously your hack here to defer inode reclaim entirely to the
> > > > workqueue (I've CC'd Dave since it sounds like he might have been
> > > > working on this further..).
> > > 
> > > I was working on a more solid set of changes, but every time I
> > > updated the kernel tree I used as my base for development, the
> > > baseline kernel reclaim behaviour would change. I'd isolate the
> > > behavioural change, upgrade to the kernel that contained the fix,
> > > and then trip over some new whacky behaviour that made no sense. I
> > > spent more time in this loop than actually trying to fix the XFS
> > > problem - chasing a moving target makes finding the root cause of
> > > the reclaim stalls just about impossible. 
> > > 
> > > Brian, I can send you what I have but it's really just a bag of
> > > bolts at this point because I was never able to validate that any of
> > > the patches made a measurable improvement to reclaim behaviour under
> > > any workload I ran.....
> > > 
> > 
> > Sure, I'm curious to see what direction this goes in. I would think
> > anything that provides a backoff to other shrinkers would help this
> > particular workload where many different filesystems are active. FWIW,
> > I'd probably also need more details about what workloads you're testing
> > and how you're measuring improvements and whatnot to try and take any of
> > that stuff any farther (particularly how you verify the problems with
> > dropping blocking behavior entirely), though..
> 
> Ok, I'll send you a copy...
> 

Thanks.

> > > FWIW, the major problem with removing the blocking in inode reclaim
> > > is the ease with which you can then trigger the OOM killer from
> > > userspace.  The high level memory reclaim algorithms break down when
> > > there are hundreds of direct reclaim processes hammering on reclaim
> > > and reclaim stops making progress because it's skipping dirty
> > > objects.  Direct reclaim ends up insufficiently throttled, so rather
> > > than blocking it winds up reclaim priority and then declares OOM
> > > because reclaim runs out of retries before sufficient memory has
> > > been freed.
> > > 
> > 
> > I'd need to spend some time in the shrinker code to grok this, but if
> > there's such a priority, would switching blocking behavior based on
> > priority provide a way to mitigate this problem from within the
> > shrinker? For example, provide non-blocking behavior on the lowest
> > priority to kick off flushing and allow progress into other shrinkers,
> > otherwise we flush and wait if the priority is elevated..?
> 
> I tried that - exporting the priority to the shrink_control and so
> on. The system either behaved the same (i.e. stalled on reclaim) or
> randomly fell into a screaming pile of OOM killer rage-death. There
> was no in-between reliable state...
> 

Hmm, Ok, well at least I'm not completely off the rails then. :)

> > IOW, it sounds like the problem in this case is that we subject the rest
> > of the allocation infrastructure to delays in configurations where we
> > are one of N potential shrinkers with reclaimable objects, because we
> > have to deal with this situation where our one shrinker actually is the
> > main/primary choke point for multiple allocator -> direct reclaimers.
> 
> Yup. The mm reclaim design sucks ass because if we use purely
> non-blocking reclaim techniques in the shrinkers we have no way of
> throttling allocation demand to the rate at which shrinkers can
> reclaim objects and we end up with gross cache imbalances under
> memory pressure.
> 
> > I'm wondering if some kind of severity parameter managed by the shrinker
> > infra would help us distinguish between those scenarios (even if it were
> > a dumb LOW/HIGH priority param, where LOW allows for one pass through
> > all of the shrinkers to kick off I/O and whatnot before any one of them
> > should actually block on locks or I/O). Then again, I'm just handwaving
> > as I'm only just familiarizing with the context and problem.
> 
> Tried that, too. :( Several different ways over the past few years.
> Remember that we also have the non-blocking background reclaim
> thread - even triggering that immediately on reclaim doesn't
> prevent stalls, and my last patchset specifically excluded it
> from concurrency control so it always ran immediately on reclaim
> 
> Hmmm - I just had a thought.
> 
> Perhaps all we need to do is remove direct reclaim from the
> shrinker. i.e. make the background reclaimer run on a ticket based
> queuing system similar to the log reservations...
> 
> For non-blocking reclaim (i.e.  kswapd), we just queue the ticket
> and return immediately, saying we've done the scan. 
> 
> For blocking reclaim, we take a ticket with the blocking scan count
> on it and wait in the wakeup queue.  As background reclaim runs, it
> decrements the lead ticket scan count. When it hits zero, wake up
> the head of the wait queue. When the background thread has no
> pending reclaim tickets or has nothing left to scan/reclaim (i.e.
> goes idle), wake everyone....
> 
> This means waiting is ordered, it doesn't matter where in the fs we
> reclaim from, direct reclaim itself doesn't get stuck waiting for
> IO/transactions/log flushing, concurrency is managed by how we
> process the ticket queue, and when we have lots of direct reclaim
> demand then the background thread just keeps running as optimally as
> possible and kswapd is never blocked...
> 

Interesting.. IIUC, what this adds is smarter waiting logic, mapping the
actual reclaim demand coming into the shrinker onto the underlying reclaim
progress, rather than relying on whatever higher level priority is
calculated in the shrinker mechanism..? Sounds like a neat idea...

Brian

> > (I also see no priority in struct shrink_control, so I guess that's an
> > internal reclaim thing as it is.)
> 
> Yup, I had to promote it all the way through from the struct
> scan_control -> priority field.
> 
> Cheers,
> 
> Dave.
> 
> -- 
> Dave Chinner
> david@fromorbit.com

^ permalink raw reply	[flat|nested] 27+ messages in thread


end of thread, other threads:[~2017-02-18 13:07 UTC | newest]

Thread overview: 27+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <bug-192981-27@https.bugzilla.kernel.org/>
2017-01-23 21:51 ` [Bug 192981] New: page allocation stalls Andrew Morton
2017-01-30 15:11   ` Alexander Polakov
2017-02-01 15:27   ` Michal Hocko
2017-02-15 12:56   ` Alexander Polakov
2017-02-15 12:56     ` Alexander Polakov
2017-02-15 16:05     ` Brian Foster
2017-02-15 16:05       ` Brian Foster
2017-02-15 16:52       ` Alexander Polakov
2017-02-15 16:52         ` Alexander Polakov
2017-02-15 18:09         ` Brian Foster
2017-02-15 18:09           ` Brian Foster
2017-02-16 10:56           ` Alexander Polakov
2017-02-16 10:56             ` Alexander Polakov
2017-02-16 17:20             ` Brian Foster
2017-02-16 17:20               ` Brian Foster
2017-02-16 22:21               ` Dave Chinner
2017-02-16 22:21                 ` Dave Chinner
2017-02-17 11:11                 ` Tetsuo Handa
2017-02-17 11:11                   ` Tetsuo Handa
2017-02-17 23:58                   ` Dave Chinner
2017-02-17 23:58                     ` Dave Chinner
2017-02-17 19:05                 ` Brian Foster
2017-02-17 19:05                   ` Brian Foster
2017-02-17 23:52                   ` Dave Chinner
2017-02-17 23:52                     ` Dave Chinner
2017-02-18 13:05                     ` Brian Foster
2017-02-18 13:05                       ` Brian Foster
