* [PATCH v2 0/4] use up highorder free pages before OOM
@ 2016-10-12  5:33 ` Minchan Kim
  0 siblings, 0 replies; 20+ messages in thread
From: Minchan Kim @ 2016-10-12  5:33 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Mel Gorman, Vlastimil Babka, Joonsoo Kim, linux-kernel, linux-mm,
	Sangseok Lee, Michal Hocko, Minchan Kim

I got an OOM report from the production team on a v4.4 kernel.
The system had enough free memory but failed to allocate a GFP_KERNEL
order-0 page and finally hit the OOM killer. It occurred during a QA
process that launches several apps, switches between them, and so on,
and it happened only rarely. IOW, in a normal situation it was not a
problem, but if we are unlucky and several apps hit their peak memory
usage at the same time, it can happen. If the system manages to get
past that phase, it keeps working fine.

I could reproduce it easily with my test (a memory spike). See below.

The reason is that the free pages (19M) of the DMA32 zone are reserved
for HIGHORDERATOMIC and are not unreserved before the OOM.
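
For context, the watermark check hides the highatomic reserve from
normal allocations, which is why a zone can report plenty of free
memory while an order-0 GFP_KERNEL request still fails. A minimal
sketch of that logic (modeled loosely on __zone_watermark_ok; a
simplified illustration, not the exact kernel code):

/*
 * Sketch: unless the caller may dig deeper into reserves (e.g.,
 * atomic allocations), the free page count is reduced by the
 * highatomic reserve, so those pages are invisible to the check.
 */
static bool watermark_ok_sketch(struct zone *z, unsigned long mark,
				long free_pages, bool alloc_harder)
{
	long min = mark;

	if (!alloc_harder)
		free_pages -= z->nr_reserved_highatomic;
	else
		min -= min / 4;

	/* order-0 case only; the real check also honors lowmem_reserve */
	return free_pages > min;
}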

balloon invoked oom-killer: gfp_mask=0x24280ca(GFP_HIGHUSER_MOVABLE|__GFP_ZERO), order=0, oom_score_adj=0
balloon cpuset=/ mems_allowed=0
CPU: 1 PID: 8473 Comm: balloon Tainted: G        W  OE   4.8.0-rc7-00219-g3f74c9559583-dirty #3161
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Ubuntu-1.8.2-1ubuntu1 04/01/2014
 0000000000000000 ffff88007f15bbc8 ffffffff8138eb13 ffff88007f15bd88
 ffff88005a72a4c0 ffff88007f15bc28 ffffffff811d2d13 ffff88007f15bc08
 ffffffff8146a5ca ffffffff81c8df60 0000000000000015 0000000000000206
Call Trace:
 [<ffffffff8138eb13>] dump_stack+0x63/0x90
 [<ffffffff811d2d13>] dump_header+0x5c/0x1ce
 [<ffffffff8146a5ca>] ? virtballoon_oom_notify+0x2a/0x80
 [<ffffffff81171e5e>] oom_kill_process+0x22e/0x400
 [<ffffffff8117222c>] out_of_memory+0x1ac/0x210
 [<ffffffff811775ce>] __alloc_pages_nodemask+0x101e/0x1040
 [<ffffffff811a245a>] handle_mm_fault+0xa0a/0xbf0
 [<ffffffff8106029d>] __do_page_fault+0x1dd/0x4d0
 [<ffffffff81060653>] trace_do_page_fault+0x43/0x130
 [<ffffffff81059bda>] do_async_page_fault+0x1a/0xa0
 [<ffffffff817a3f38>] async_page_fault+0x28/0x30
Mem-Info:
active_anon:383949 inactive_anon:106724 isolated_anon:0
 active_file:15 inactive_file:44 isolated_file:0
 unevictable:0 dirty:0 writeback:24 unstable:0
 slab_reclaimable:2483 slab_unreclaimable:3326
 mapped:0 shmem:0 pagetables:1906 bounce:0
 free:6898 free_pcp:291 free_cma:0
Node 0 active_anon:1535796kB inactive_anon:426896kB active_file:60kB inactive_file:176kB unevictable:0kB isolated(anon):0kB isolated(file):0kB mapped:0kB dirty:0kB writeback:96kB shmem:0kB writeback_tmp:0kB unstable:0kB pages_scanned:1418 all_unreclaimable? no
DMA free:8188kB min:44kB low:56kB high:68kB active_anon:7648kB inactive_anon:0kB active_file:0kB inactive_file:4kB unevictable:0kB writepending:0kB present:15992kB managed:15908kB mlocked:0kB slab_reclaimable:0kB slab_unreclaimable:20kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
lowmem_reserve[]: 0 1952 1952 1952
DMA32 free:19404kB min:5628kB low:7624kB high:9620kB active_anon:1528148kB inactive_anon:426896kB active_file:60kB inactive_file:420kB unevictable:0kB writepending:96kB present:2080640kB managed:2030092kB mlocked:0kB slab_reclaimable:9932kB slab_unreclaimable:13284kB kernel_stack:2496kB pagetables:7624kB bounce:0kB free_pcp:900kB local_pcp:112kB free_cma:0kB
lowmem_reserve[]: 0 0 0 0
DMA: 0*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 2*4096kB (H) = 8192kB
DMA32: 7*4kB (H) 8*8kB (H) 30*16kB (H) 31*32kB (H) 14*64kB (H) 9*128kB (H) 2*256kB (H) 2*512kB (H) 4*1024kB (H) 5*2048kB (H) 0*4096kB = 19484kB
51131 total pagecache pages
50795 pages in swap cache
Swap cache stats: add 3532405601, delete 3532354806, find 124289150/1822712228
Free swap  = 8kB
Total swap = 255996kB
524158 pages RAM
0 pages HighMem/MovableOnly
12658 pages reserved
0 pages cma reserved
0 pages hwpoisoned

Another example, where the highatomic reserve exceeded its limit
because of the race, is:

in:imklog: page allocation failure: order:0, mode:0x2280020(GFP_ATOMIC|__GFP_NOTRACK)
CPU: 0 PID: 476 Comm: in:imklog Tainted: G            E   4.8.0-rc7-00217-g266ef83c51e5-dirty #3135
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Ubuntu-1.8.2-1ubuntu1 04/01/2014
 0000000000000000 ffff880077c37590 ffffffff81389033 0000000000000000
 0000000000000000 ffff880077c37618 ffffffff8117519b 0228002000000000
 ffffffffffffffff ffffffff81cedb40 0000000000000000 0000000000000040
Call Trace:
 [<ffffffff81389033>] dump_stack+0x63/0x90
 [<ffffffff8117519b>] warn_alloc_failed+0xdb/0x130
 [<ffffffff81175746>] __alloc_pages_nodemask+0x4d6/0xdb0
 [<ffffffff8120c149>] ? bdev_write_page+0xa9/0xd0
 [<ffffffff811a97b3>] ? __page_check_address+0xd3/0x130
 [<ffffffff811ba4ea>] ? deactivate_slab+0x12a/0x3e0
 [<ffffffff811b9549>] new_slab+0x339/0x490
 [<ffffffff811bad37>] ___slab_alloc.constprop.74+0x367/0x480
 [<ffffffff814601ad>] ? alloc_indirect.isra.14+0x1d/0x50
 [<ffffffff8109d0c2>] ? default_wake_function+0x12/0x20
 [<ffffffff811bae70>] __slab_alloc.constprop.73+0x20/0x40
 [<ffffffff811bb034>] __kmalloc+0x1a4/0x1e0
 [<ffffffff814601ad>] alloc_indirect.isra.14+0x1d/0x50
 [<ffffffff81460434>] virtqueue_add_sgs+0x1c4/0x470
 [<ffffffff81365075>] ? __bt_get.isra.8+0xe5/0x1c0
 [<ffffffff8150973e>] __virtblk_add_req+0xae/0x1f0
 [<ffffffff810b37d0>] ? wake_atomic_t_function+0x60/0x60
 [<ffffffff810337b9>] ? sched_clock+0x9/0x10
 [<ffffffff81360afb>] ? __blk_mq_alloc_request+0x10b/0x230
 [<ffffffff8135e293>] ? blk_rq_map_sg+0x213/0x550
 [<ffffffff81509a1d>] virtio_queue_rq+0x12d/0x290
 [<ffffffff813629c9>] __blk_mq_run_hw_queue+0x239/0x370
 [<ffffffff8136276f>] blk_mq_run_hw_queue+0x8f/0xb0
 [<ffffffff8136397c>] blk_mq_insert_requests+0x18c/0x1a0
 [<ffffffff81364865>] blk_mq_flush_plug_list+0x125/0x140
 [<ffffffff813596a7>] blk_flush_plug_list+0xc7/0x220
 [<ffffffff81359bec>] blk_finish_plug+0x2c/0x40
 [<ffffffff8117b836>] __do_page_cache_readahead+0x196/0x230
 [<ffffffffa00006ba>] ? zram_free_page+0x3a/0xb0 [zram]
 [<ffffffff8116f928>] filemap_fault+0x448/0x4f0
 [<ffffffff8119e9e4>] ? alloc_set_pte+0xe4/0x350
 [<ffffffff8125fa16>] ext4_filemap_fault+0x36/0x50
 [<ffffffff8119be35>] __do_fault+0x75/0x140
 [<ffffffff8119f6cd>] handle_mm_fault+0x84d/0xbe0
 [<ffffffff812483e4>] ? kmsg_read+0x44/0x60
 [<ffffffff8106029d>] __do_page_fault+0x1dd/0x4d0
 [<ffffffff81060653>] trace_do_page_fault+0x43/0x130
 [<ffffffff81059bda>] do_async_page_fault+0x1a/0xa0
 [<ffffffff8179dcb8>] async_page_fault+0x28/0x30
Mem-Info:
active_anon:363826 inactive_anon:121283 isolated_anon:32
 active_file:65 inactive_file:152 isolated_file:0
 unevictable:0 dirty:0 writeback:46 unstable:0
 slab_reclaimable:2778 slab_unreclaimable:3070
 mapped:112 shmem:0 pagetables:1822 bounce:0
 free:9469 free_pcp:231 free_cma:0
Node 0 active_anon:1455304kB inactive_anon:485132kB active_file:260kB inactive_file:608kB unevictable:0kB isolated(anon):128kB isolated(file):0kB mapped:448kB dirty:0kB writeback:184kB shmem:0kB writeback_tmp:0kB unstable:0kB pages_scanned:13641 all_unreclaimable? no
DMA free:7748kB min:44kB low:56kB high:68kB active_anon:7944kB inactive_anon:104kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15992kB managed:15908kB mlocked:0kB slab_reclaimable:0kB slab_unreclaimable:108kB kernel_stack:0kB pagetables:4kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
lowmem_reserve[]: 0 1952 1952 1952
DMA32 free:30128kB min:5628kB low:7624kB high:9620kB active_anon:1447360kB inactive_anon:485028kB active_file:260kB inactive_file:608kB unevictable:0kB writepending:184kB present:2080640kB managed:2030132kB mlocked:0kB slab_reclaimable:11112kB slab_unreclaimable:12172kB kernel_stack:2400kB pagetables:7284kB bounce:0kB free_pcp:924kB local_pcp:72kB free_cma:0kB
lowmem_reserve[]: 0 0 0 0
DMA: 7*4kB (UE) 3*8kB (UH) 1*16kB (M) 0*32kB 2*64kB (U) 1*128kB (M) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (U) 1*4096kB (H) = 7748kB
DMA32: 10*4kB (H) 3*8kB (H) 47*16kB (H) 38*32kB (H) 5*64kB (H) 1*128kB (H) 2*256kB (H) 3*512kB (H) 3*1024kB (H) 3*2048kB (H) 4*4096kB (H) = 30128kB
2775 total pagecache pages
2536 pages in swap cache
Swap cache stats: add 206786828, delete 206784292, find 7323106/106686077
Free swap  = 108744kB
Total swap = 255996kB
524158 pages RAM
0 pages HighMem/MovableOnly
12648 pages reserved
0 pages cma reserved
0 pages hwpoisoned

During the investigation, I found some problems with highatomic, so
this patchset aims to solve them; the final goal is to unreserve
every highatomic free page before the OOM kill.

Minchan Kim (4):
  mm: don't steal highatomic pageblock
  mm: prevent double decrease of nr_reserved_highatomic
  mm: try to exhaust highatomic reserve before the OOM
  mm: make unreserve highatomic functions reliable

 mm/page_alloc.c | 63 ++++++++++++++++++++++++++++++++++++++++++---------------
 1 file changed, 47 insertions(+), 16 deletions(-)

-- 
2.7.4

* [PATCH v2 1/4] mm: don't steal highatomic pageblock
  2016-10-12  5:33 ` Minchan Kim
@ 2016-10-12  5:33   ` Minchan Kim
  -1 siblings, 0 replies; 20+ messages in thread
From: Minchan Kim @ 2016-10-12  5:33 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Mel Gorman, Vlastimil Babka, Joonsoo Kim, linux-kernel, linux-mm,
	Sangseok Lee, Michal Hocko, Minchan Kim

In the page freeing path, the migratetype is racy, so a highorderatomic
page can be freed onto a non-highorderatomic free list. If that page
is then allocated, the VM can change the pageblock from highorderatomic
to another type. In that case, highatomic pageblock accounting is
broken and no longer works (e.g., the VM cannot reserve highorderatomic
pageblocks any more even though it hasn't reached the 1% limit).

So, this patch prohibits changing a pageblock from highatomic to any
other type. That is fine because MIGRATE_HIGHATOMIC is not listed in
the fallback array, so stealing would only happen due to unexpected
races, which are really rare. Also, such prohibition keeps a
highatomic pageblock around longer, which should be better for
highorderatomic page allocation.

Acked-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 mm/page_alloc.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 55ad0229ebf3..79853b258211 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2154,7 +2154,8 @@ __rmqueue_fallback(struct zone *zone, unsigned int order, int start_migratetype)
 
 		page = list_first_entry(&area->free_list[fallback_mt],
 						struct page, lru);
-		if (can_steal)
+		if (can_steal &&
+			get_pageblock_migratetype(page) != MIGRATE_HIGHATOMIC)
 			steal_suitable_fallback(zone, page, start_migratetype);
 
 		/* Remove the page from the freelists */
@@ -2555,7 +2556,8 @@ int __isolate_free_page(struct page *page, unsigned int order)
 		struct page *endpage = page + (1 << order) - 1;
 		for (; page < endpage; page += pageblock_nr_pages) {
 			int mt = get_pageblock_migratetype(page);
-			if (!is_migrate_isolate(mt) && !is_migrate_cma(mt))
+			if (!is_migrate_isolate(mt) && !is_migrate_cma(mt)
+				&& mt != MIGRATE_HIGHATOMIC)
 				set_pageblock_migratetype(page,
 							  MIGRATE_MOVABLE);
 		}
-- 
2.7.4

* [PATCH v2 2/4] mm: prevent double decrease of nr_reserved_highatomic
  2016-10-12  5:33 ` Minchan Kim
@ 2016-10-12  5:33   ` Minchan Kim
  -1 siblings, 0 replies; 20+ messages in thread
From: Minchan Kim @ 2016-10-12  5:33 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Mel Gorman, Vlastimil Babka, Joonsoo Kim, linux-kernel, linux-mm,
	Sangseok Lee, Michal Hocko, Minchan Kim

There is a race between page freeing and unreserving a highatomic
pageblock.

 CPU 0				    CPU 1

    free_hot_cold_page
      mt = get_pfnblock_migratetype
      set_pcppage_migratetype(page, mt)
    				    unreserve_highatomic_pageblock
    				    spin_lock_irqsave(&zone->lock)
    				    move_freepages_block
    				    set_pageblock_migratetype(page)
    				    spin_unlock_irqrestore(&zone->lock)
      free_pcppages_bulk
        __free_one_page(mt) <- mt is stale

Because of the above race, the page on CPU 0 can end up on a
non-highorderatomic free list since the pageblock's type has been
changed. As a result, the highorderatomic unreserve logic can decrease
the reserved count for the same pageblock several times, creating a
mismatch between nr_reserved_highatomic and the number of reserved
pageblocks.

So, this patch verifies whether the pageblock is highatomic and
decreases the count only if it is.
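
The stale mt comes from the freeing path sampling the pageblock's
migratetype outside zone->lock, as in the diagram above. A rough
sketch of that sampling (modeled loosely on free_hot_cold_page; a
simplified illustration, not the exact kernel code):

/*
 * Sketch: the migratetype is read and cached in the page before
 * zone->lock is taken, so a concurrent set_pageblock_migratetype()
 * on another CPU can make the cached value stale by the time
 * free_pcppages_bulk()/__free_one_page() consumes it.
 */
static void free_page_sketch(struct page *page, unsigned long pfn)
{
	int migratetype;

	migratetype = get_pfnblock_migratetype(page, pfn); /* no zone->lock */
	set_pcppage_migratetype(page, migratetype);        /* cached copy */
	/* ... later, free_pcppages_bulk() frees using the cached type ... */
}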

Acked-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 mm/page_alloc.c | 24 ++++++++++++++++++------
 1 file changed, 18 insertions(+), 6 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 79853b258211..18808f392718 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2106,13 +2106,25 @@ static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
 				continue;
 
 			/*
-			 * It should never happen but changes to locking could
-			 * inadvertently allow a per-cpu drain to add pages
-			 * to MIGRATE_HIGHATOMIC while unreserving so be safe
-			 * and watch for underflows.
+			 * In the page freeing path, the migratetype change is
+			 * racy, so we can encounter several free pages of a
+			 * pageblock in this loop although we changed the
+			 * pageblock type from highatomic to ac->migratetype.
+			 * So we should adjust the count only once.
 			 */
-			zone->nr_reserved_highatomic -= min(pageblock_nr_pages,
-				zone->nr_reserved_highatomic);
+			if (get_pageblock_migratetype(page) ==
+							MIGRATE_HIGHATOMIC) {
+				/*
+				 * It should never happen but changes to
+				 * locking could inadvertently allow a per-cpu
+				 * drain to add pages to MIGRATE_HIGHATOMIC
+				 * while unreserving so be safe and watch for
+				 * underflows.
+				 */
+				zone->nr_reserved_highatomic -= min(
+						pageblock_nr_pages,
+						zone->nr_reserved_highatomic);
+			}
 
 			/*
 			 * Convert to ac->migratetype and avoid the normal
-- 
2.7.4

* [PATCH v2 3/4] mm: try to exhaust highatomic reserve before the OOM
  2016-10-12  5:33 ` Minchan Kim
@ 2016-10-12  5:33   ` Minchan Kim
  -1 siblings, 0 replies; 20+ messages in thread
From: Minchan Kim @ 2016-10-12  5:33 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Mel Gorman, Vlastimil Babka, Joonsoo Kim, linux-kernel, linux-mm,
	Sangseok Lee, Michal Hocko, Minchan Kim

It's weird for a zone to show enough free memory above the min
watermark yet OOM on a 4K GFP_KERNEL allocation because of reserved
highatomic pages. As a last resort, try to unreserve highatomic
pages again and, if that moved pages onto a non-highatomic free
list, retry reclaim once more.

Signed-off-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 mm/page_alloc.c | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 18808f392718..a7472426663f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2080,7 +2080,7 @@ static void reserve_highatomic_pageblock(struct page *page, struct zone *zone,
  * intense memory pressure but failed atomic allocations should be easier
  * to recover from than an OOM.
  */
-static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
+static bool unreserve_highatomic_pageblock(const struct alloc_context *ac)
 {
 	struct zonelist *zonelist = ac->zonelist;
 	unsigned long flags;
@@ -2088,6 +2088,7 @@ static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
 	struct zone *zone;
 	struct page *page;
 	int order;
+	bool ret = false;
 
 	for_each_zone_zonelist_nodemask(zone, z, zonelist, ac->high_zoneidx,
 								ac->nodemask) {
@@ -2136,12 +2137,14 @@ static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
 			 * may increase.
 			 */
 			set_pageblock_migratetype(page, ac->migratetype);
-			move_freepages_block(zone, page, ac->migratetype);
+			ret = move_freepages_block(zone, page, ac->migratetype);
 			spin_unlock_irqrestore(&zone->lock, flags);
-			return;
+			return ret;
 		}
 		spin_unlock_irqrestore(&zone->lock, flags);
 	}
+
+	return ret;
 }
 
 /* Remove an element from the buddy allocator from the fallback list */
@@ -3457,8 +3460,12 @@ should_reclaim_retry(gfp_t gfp_mask, unsigned order,
 	 * Make sure we converge to OOM if we cannot make any progress
 	 * several times in the row.
 	 */
-	if (*no_progress_loops > MAX_RECLAIM_RETRIES)
+	if (*no_progress_loops > MAX_RECLAIM_RETRIES) {
+		/* Before OOM, exhaust highatomic_reserve */
+		if (unreserve_highatomic_pageblock(ac))
+			return true;
 		return false;
+	}
 
 	/*
 	 * Keep reclaiming pages while there is a chance this will lead
-- 
2.7.4

* [PATCH v2 4/4] mm: make unreserve highatomic functions reliable
  2016-10-12  5:33 ` Minchan Kim
@ 2016-10-12  5:33   ` Minchan Kim
  -1 siblings, 0 replies; 20+ messages in thread
From: Minchan Kim @ 2016-10-12  5:33 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Mel Gorman, Vlastimil Babka, Joonsoo Kim, linux-kernel, linux-mm,
	Sangseok Lee, Michal Hocko, Minchan Kim

Currently, unreserve_highatomic_pageblock bails out as soon as it
finds a highatomic pageblock, regardless of whether it really moved
free pages out of it, which undermines the unreserve logic's goal
of saving a process from OOM.

This patch makes the unreserve function bail out only if it has
moved some pages off the highatomic free list, to avoid such a
false positive.

Another potential problem is that, due to the race between page
freeing and the reserve highatomic function, pages can sit on the
highatomic free list even though the pageblock's migratetype is not
highatomic. In that case, unreserve_highatomic_pageblock can be a
no-op if the highatomic reserve count is less than
pageblock_nr_pages. We can solve that simply by draining all of the
reserved pages before the OOM. It acts as a safeguard, exhausting
the reserved pages before converging to OOM.

Signed-off-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 mm/page_alloc.c | 24 +++++++++++++++++-------
 1 file changed, 17 insertions(+), 7 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index a7472426663f..565589eae6a2 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2079,8 +2079,12 @@ static void reserve_highatomic_pageblock(struct page *page, struct zone *zone,
  * potentially hurts the reliability of high-order allocations when under
  * intense memory pressure but failed atomic allocations should be easier
  * to recover from than an OOM.
+ *
+ * If @drain is true, try to move all of the reserved pages out of the
+ * highatomic free list.
  */
-static bool unreserve_highatomic_pageblock(const struct alloc_context *ac)
+static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
+						bool drain)
 {
 	struct zonelist *zonelist = ac->zonelist;
 	unsigned long flags;
@@ -2092,8 +2096,12 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac)
 
 	for_each_zone_zonelist_nodemask(zone, z, zonelist, ac->high_zoneidx,
 								ac->nodemask) {
-		/* Preserve at least one pageblock */
-		if (zone->nr_reserved_highatomic <= pageblock_nr_pages)
+		/*
+		 * Preserve at least one pageblock unless memory pressure
+		 * is really high.
+		 */
+		if (!drain && zone->nr_reserved_highatomic <=
+					pageblock_nr_pages)
 			continue;
 
 		spin_lock_irqsave(&zone->lock, flags);
@@ -2138,8 +2146,10 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac)
 			 */
 			set_pageblock_migratetype(page, ac->migratetype);
 			ret = move_freepages_block(zone, page, ac->migratetype);
-			spin_unlock_irqrestore(&zone->lock, flags);
-			return ret;
+			if (!drain && ret) {
+				spin_unlock_irqrestore(&zone->lock, flags);
+				return ret;
+			}
 		}
 		spin_unlock_irqrestore(&zone->lock, flags);
 	}
@@ -3343,7 +3353,7 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
 	 * Shrink them them and try again
 	 */
 	if (!page && !drained) {
-		unreserve_highatomic_pageblock(ac);
+		unreserve_highatomic_pageblock(ac, false);
 		drain_all_pages(NULL);
 		drained = true;
 		goto retry;
@@ -3462,7 +3472,7 @@ should_reclaim_retry(gfp_t gfp_mask, unsigned order,
 	 */
 	if (*no_progress_loops > MAX_RECLAIM_RETRIES) {
 		/* Before OOM, exhaust highatomic_reserve */
-		if (unreserve_highatomic_pageblock(ac))
+		if (unreserve_highatomic_pageblock(ac, true))
 			return true;
 		return false;
 	}
-- 
2.7.4

* Re: [PATCH v2 3/4] mm: try to exhaust highatomic reserve before the OOM
  2016-10-12  5:33   ` Minchan Kim
@ 2016-10-12  7:14     ` Vlastimil Babka
  -1 siblings, 0 replies; 20+ messages in thread
From: Vlastimil Babka @ 2016-10-12  7:14 UTC (permalink / raw)
  To: Minchan Kim, Andrew Morton
  Cc: Mel Gorman, Joonsoo Kim, linux-kernel, linux-mm, Sangseok Lee,
	Michal Hocko

On 10/12/2016 07:33 AM, Minchan Kim wrote:
> It's weird for a zone to show enough free memory above the min
> watermark yet OOM on a 4K GFP_KERNEL allocation because of reserved
> highatomic pages. As a last resort, try to unreserve highatomic
> pages again and, if that moved pages onto a non-highatomic free
> list, retry reclaim once more.

I would move the details (OOM report etc.) from the cover letter here,
otherwise they end up in Patch 1's changelog, which is less helpful.

> Signed-off-by: Michal Hocko <mhocko@suse.com>
> Signed-off-by: Minchan Kim <minchan@kernel.org>

Acked-by: Vlastimil Babka <vbabka@suse.cz>

> ---
>  mm/page_alloc.c | 15 +++++++++++----
>  1 file changed, 11 insertions(+), 4 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 18808f392718..a7472426663f 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2080,7 +2080,7 @@ static void reserve_highatomic_pageblock(struct page *page, struct zone *zone,
>   * intense memory pressure but failed atomic allocations should be easier
>   * to recover from than an OOM.
>   */
> -static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
> +static bool unreserve_highatomic_pageblock(const struct alloc_context *ac)
>  {
>  	struct zonelist *zonelist = ac->zonelist;
>  	unsigned long flags;
> @@ -2088,6 +2088,7 @@ static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
>  	struct zone *zone;
>  	struct page *page;
>  	int order;
> +	bool ret = false;
>
>  	for_each_zone_zonelist_nodemask(zone, z, zonelist, ac->high_zoneidx,
>  								ac->nodemask) {
> @@ -2136,12 +2137,14 @@ static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
>  			 * may increase.
>  			 */
>  			set_pageblock_migratetype(page, ac->migratetype);
> -			move_freepages_block(zone, page, ac->migratetype);
> +			ret = move_freepages_block(zone, page, ac->migratetype);
>  			spin_unlock_irqrestore(&zone->lock, flags);
> -			return;
> +			return ret;
>  		}
>  		spin_unlock_irqrestore(&zone->lock, flags);
>  	}
> +
> +	return ret;
>  }
>
>  /* Remove an element from the buddy allocator from the fallback list */
> @@ -3457,8 +3460,12 @@ should_reclaim_retry(gfp_t gfp_mask, unsigned order,
>  	 * Make sure we converge to OOM if we cannot make any progress
>  	 * several times in the row.
>  	 */
> -	if (*no_progress_loops > MAX_RECLAIM_RETRIES)
> +	if (*no_progress_loops > MAX_RECLAIM_RETRIES) {
> +		/* Before OOM, exhaust highatomic_reserve */
> +		if (unreserve_highatomic_pageblock(ac))
> +			return true;
>  		return false;
> +	}
>
>  	/*
>  	 * Keep reclaiming pages while there is a chance this will lead
>

* Re: [PATCH v2 4/4] mm: make unreserve highatomic functions reliable
  2016-10-12  5:33   ` Minchan Kim
@ 2016-10-12  7:19     ` Vlastimil Babka
  -1 siblings, 0 replies; 20+ messages in thread
From: Vlastimil Babka @ 2016-10-12  7:19 UTC (permalink / raw)
  To: Minchan Kim, Andrew Morton
  Cc: Mel Gorman, Joonsoo Kim, linux-kernel, linux-mm, Sangseok Lee,
	Michal Hocko

On 10/12/2016 07:33 AM, Minchan Kim wrote:
> Currently, unreserve_highatomic_pageblock bails out as soon as it
> finds a highatomic pageblock, regardless of whether it really moved
> free pages out of it, which undermines the unreserve logic's goal
> of saving a process from OOM.
>
> This patch makes the unreserve function bail out only if it has
> moved some pages off the highatomic free list, to avoid such a
> false positive.
>
> Another potential problem is that, due to the race between page
> freeing and the reserve highatomic function, pages can sit on the
> highatomic free list even though the pageblock's migratetype is not
> highatomic. In that case, unreserve_highatomic_pageblock can be a
> no-op if the highatomic reserve count is less than
> pageblock_nr_pages. We can solve that simply by draining all of the
> reserved pages before the OOM. It acts as a safeguard, exhausting
> the reserved pages before converging to OOM.
>
> Signed-off-by: Michal Hocko <mhocko@suse.com>

Ah, I think that the first S-o-b has to match "From:" to be a valid
chain (also for 3/4).
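
For instance, for a patch authored by Michal, the headers and tags
would look something like (illustrative example only):

From: Michal Hocko <mhocko@suse.com>

Signed-off-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Minchan Kim <minchan@kernel.org>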

> Signed-off-by: Minchan Kim <minchan@kernel.org>

Acked-by: Vlastimil Babka <vbabka@suse.cz>

> ---
>  mm/page_alloc.c | 24 +++++++++++++++++-------
>  1 file changed, 17 insertions(+), 7 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index a7472426663f..565589eae6a2 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2079,8 +2079,12 @@ static void reserve_highatomic_pageblock(struct page *page, struct zone *zone,
>   * potentially hurts the reliability of high-order allocations when under
>   * intense memory pressure but failed atomic allocations should be easier
>   * to recover from than an OOM.
> + *
> + * If @drain is true, try to move all of the reserved pages out of the
> + * highatomic free list.
>   */
> -static bool unreserve_highatomic_pageblock(const struct alloc_context *ac)
> +static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
> +						bool drain)
>  {
>  	struct zonelist *zonelist = ac->zonelist;
>  	unsigned long flags;
> @@ -2092,8 +2096,12 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac)
>
>  	for_each_zone_zonelist_nodemask(zone, z, zonelist, ac->high_zoneidx,
>  								ac->nodemask) {
> -		/* Preserve at least one pageblock */
> -		if (zone->nr_reserved_highatomic <= pageblock_nr_pages)
> +		/*
> +		 * Preserve at least one pageblock unless memory pressure
> +		 * is really high.
> +		 */
> +		if (!drain && zone->nr_reserved_highatomic <=
> +					pageblock_nr_pages)
>  			continue;
>
>  		spin_lock_irqsave(&zone->lock, flags);
> @@ -2138,8 +2146,10 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac)
>  			 */
>  			set_pageblock_migratetype(page, ac->migratetype);
>  			ret = move_freepages_block(zone, page, ac->migratetype);
> -			spin_unlock_irqrestore(&zone->lock, flags);
> -			return ret;
> +			if (!drain && ret) {
> +				spin_unlock_irqrestore(&zone->lock, flags);
> +				return ret;
> +			}
>  		}
>  		spin_unlock_irqrestore(&zone->lock, flags);
>  	}
> @@ -3343,7 +3353,7 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
>  	 * Shrink them them and try again
>  	 */
>  	if (!page && !drained) {
> -		unreserve_highatomic_pageblock(ac);
> +		unreserve_highatomic_pageblock(ac, false);
>  		drain_all_pages(NULL);
>  		drained = true;
>  		goto retry;
> @@ -3462,7 +3472,7 @@ should_reclaim_retry(gfp_t gfp_mask, unsigned order,
>  	 */
>  	if (*no_progress_loops > MAX_RECLAIM_RETRIES) {
>  		/* Before OOM, exhaust highatomic_reserve */
> -		if (unreserve_highatomic_pageblock(ac))
> +		if (unreserve_highatomic_pageblock(ac, true))
>  			return true;
>  		return false;
>  	}
>

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v2 4/4] mm: make unreserve highatomic functions reliable
@ 2016-10-12  7:19     ` Vlastimil Babka
  0 siblings, 0 replies; 20+ messages in thread
From: Vlastimil Babka @ 2016-10-12  7:19 UTC (permalink / raw)
  To: Minchan Kim, Andrew Morton
  Cc: Mel Gorman, Joonsoo Kim, linux-kernel, linux-mm, Sangseok Lee,
	Michal Hocko

On 10/12/2016 07:33 AM, Minchan Kim wrote:
> Currently, unreserve_highatomic_pageblock bails out if it found
> highatomic pageblock regardless of really moving free pages
> from the one so that it could mitigate unreserve logic's goal
> which saves OOM of a process.
>
> This patch makes unreserve functions bail out only if it moves
> some pages out of !highatomic free list to avoid such false
> positive.
>
> Another potential problem is that by race between page freeing and
> reserve highatomic function, pages could be in highatomic free list
> even though the pageblock is !high atomic migratetype. In that case,
> unreserve_highatomic_pageblock can be void if count of highatomic
> reserve is less than pageblock_nr_pages. We could solve it simply
> via draining all of reserved pages before the OOM. It would have
> a safeguard role to exhuast reserved pages before converging to OOM.
>
> Signed-off-by: Michal Hocko <mhocko@suse.com>

Ah, I think that the first S-o-b has to match "From:" to be valid chain (also 
for 3/4).

> Signed-off-by: Minchan Kim <minchan@kernel.org>

Acked-by: Vlastimil Babka <vbabka@suse.cz>

> ---
>  mm/page_alloc.c | 24 +++++++++++++++++-------
>  1 file changed, 17 insertions(+), 7 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index a7472426663f..565589eae6a2 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2079,8 +2079,12 @@ static void reserve_highatomic_pageblock(struct page *page, struct zone *zone,
>   * potentially hurts the reliability of high-order allocations when under
>   * intense memory pressure but failed atomic allocations should be easier
>   * to recover from than an OOM.
> + *
> + * If @drain is true, try to move all of reserved pages out of highatomic
> + * free list.
>   */
> -static bool unreserve_highatomic_pageblock(const struct alloc_context *ac)
> +static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
> +						bool drain)
>  {
>  	struct zonelist *zonelist = ac->zonelist;
>  	unsigned long flags;
> @@ -2092,8 +2096,12 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac)
>
>  	for_each_zone_zonelist_nodemask(zone, z, zonelist, ac->high_zoneidx,
>  								ac->nodemask) {
> -		/* Preserve at least one pageblock */
> -		if (zone->nr_reserved_highatomic <= pageblock_nr_pages)
> +		/*
> +		 * Preserve at least one pageblock unless memory pressure
> +		 * is really high.
> +		 */
> +		if (!drain && zone->nr_reserved_highatomic <=
> +					pageblock_nr_pages)
>  			continue;
>
>  		spin_lock_irqsave(&zone->lock, flags);
> @@ -2138,8 +2146,10 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac)
>  			 */
>  			set_pageblock_migratetype(page, ac->migratetype);
>  			ret = move_freepages_block(zone, page, ac->migratetype);
> -			spin_unlock_irqrestore(&zone->lock, flags);
> -			return ret;
> +			if (!drain && ret) {
> +				spin_unlock_irqrestore(&zone->lock, flags);
> +				return ret;
> +			}
>  		}
>  		spin_unlock_irqrestore(&zone->lock, flags);
>  	}
> @@ -3343,7 +3353,7 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
>  	 * Shrink them them and try again
>  	 */
>  	if (!page && !drained) {
> -		unreserve_highatomic_pageblock(ac);
> +		unreserve_highatomic_pageblock(ac, false);
>  		drain_all_pages(NULL);
>  		drained = true;
>  		goto retry;
> @@ -3462,7 +3472,7 @@ should_reclaim_retry(gfp_t gfp_mask, unsigned order,
>  	 */
>  	if (*no_progress_loops > MAX_RECLAIM_RETRIES) {
>  		/* Before OOM, exhaust highatomic_reserve */
> -		if (unreserve_highatomic_pageblock(ac))
> +		if (unreserve_highatomic_pageblock(ac, true))
>  			return true;
>  		return false;
>  	}
>
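A hypothetical userspace toy of the drain policy above may make the
control flow easier to see (illustrative names, a single fake zone and a
fake move_freepages_block(); a sketch only, not the kernel code):

#include <stdbool.h>
#include <stdio.h>

#define PAGEBLOCK_NR_PAGES 512UL

struct zone {
	unsigned long nr_reserved_highatomic;	/* reserved pages */
};

/* Pretend one pageblock's free pages move to another migratetype. */
static unsigned long move_freepages_block(struct zone *zone)
{
	if (!zone->nr_reserved_highatomic)
		return 0;
	zone->nr_reserved_highatomic -= PAGEBLOCK_NR_PAGES;
	return PAGEBLOCK_NR_PAGES;
}

static bool unreserve_highatomic_pageblock(struct zone *zone, bool drain)
{
	bool ret = false;

	/* Preserve at least one pageblock unless pressure is really high. */
	if (!drain && zone->nr_reserved_highatomic <= PAGEBLOCK_NR_PAGES)
		return false;

	while (zone->nr_reserved_highatomic) {
		ret = move_freepages_block(zone) > 0;
		if (!drain && ret)
			return ret;	/* one freed block is enough */
	}
	return ret;	/* drain: reserve fully exhausted */
}

int main(void)
{
	struct zone z = { .nr_reserved_highatomic = 3 * PAGEBLOCK_NR_PAGES };

	printf("!drain moved=%d left=%lu\n",
	       unreserve_highatomic_pageblock(&z, false),
	       z.nr_reserved_highatomic);
	printf(" drain moved=%d left=%lu\n",
	       unreserve_highatomic_pageblock(&z, true),
	       z.nr_reserved_highatomic);
	return 0;
}

With !drain it frees exactly one pageblock and returns; with drain it
keeps moving blocks until nr_reserved_highatomic reaches zero, mirroring
the safeguard described in the changelog.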


* Re: [PATCH v2 3/4] mm: try to exhaust highatomic reserve before the OOM
  2016-10-12  5:33   ` Minchan Kim
@ 2016-10-12  7:24     ` Michal Hocko
  -1 siblings, 0 replies; 20+ messages in thread
From: Michal Hocko @ 2016-10-12  7:24 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, Mel Gorman, Vlastimil Babka, Joonsoo Kim,
	linux-kernel, linux-mm, Sangseok Lee

On Wed 12-10-16 14:33:35, Minchan Kim wrote:
> It's weird to show that a zone has enough free memory above the min
> watermark but OOMs on a 4K GFP_KERNEL allocation due to reserved
> highatomic pages. As a last resort, try to unreserve highatomic pages
> again and, if that has moved pages to the non-highatomic free list,
> retry reclaim once more.

Agreed with Vlastimil on the OOM report in the changelog. The above will
not tell the reader much about what the situation looks like or whether
the patch is really needed in a particular situation.

A few nits below, but in general this looks good to me.

> Signed-off-by: Michal Hocko <mhocko@suse.com>
> Signed-off-by: Minchan Kim <minchan@kernel.org>
> ---
>  mm/page_alloc.c | 15 +++++++++++----
>  1 file changed, 11 insertions(+), 4 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 18808f392718..a7472426663f 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2080,7 +2080,7 @@ static void reserve_highatomic_pageblock(struct page *page, struct zone *zone,
>   * intense memory pressure but failed atomic allocations should be easier
>   * to recover from than an OOM.
>   */
> -static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
> +static bool unreserve_highatomic_pageblock(const struct alloc_context *ac)
>  {
>  	struct zonelist *zonelist = ac->zonelist;
>  	unsigned long flags;
> @@ -2088,6 +2088,7 @@ static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
>  	struct zone *zone;
>  	struct page *page;
>  	int order;
> +	bool ret = false;

No need for the initialization, see below.
>  
>  	for_each_zone_zonelist_nodemask(zone, z, zonelist, ac->high_zoneidx,
>  								ac->nodemask) {
> @@ -2136,12 +2137,14 @@ static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
>  			 * may increase.
>  			 */
>  			set_pageblock_migratetype(page, ac->migratetype);
> -			move_freepages_block(zone, page, ac->migratetype);
> +			ret = move_freepages_block(zone, page, ac->migratetype);
>  			spin_unlock_irqrestore(&zone->lock, flags);
> -			return;
> +			return ret;
>  		}
>  		spin_unlock_irqrestore(&zone->lock, flags);
>  	}
> +
> +	return ret;

	return false;
>  }
>  
>  /* Remove an element from the buddy allocator from the fallback list */
> @@ -3457,8 +3460,12 @@ should_reclaim_retry(gfp_t gfp_mask, unsigned order,
>  	 * Make sure we converge to OOM if we cannot make any progress
>  	 * several times in the row.
>  	 */
> -	if (*no_progress_loops > MAX_RECLAIM_RETRIES)
> +	if (*no_progress_loops > MAX_RECLAIM_RETRIES) {
> +		/* Before OOM, exhaust highatomic_reserve */
> +		if (unreserve_highatomic_pageblock(ac))
> +			return true;

		return unreserve_highatomic_pageblock(ac);

>  		return false;
> +	}
>  
>  	/*
>  	 * Keep reclaiming pages while there is a chance this will lead
> -- 
> 2.7.4
> 
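With both nits folded in, the tail of should_reclaim_retry() would read
roughly as follows (a sketch; the rest of the function is assumed
unchanged):

	/*
	 * Make sure we converge to OOM if we cannot make any progress
	 * several times in the row.
	 */
	if (*no_progress_loops > MAX_RECLAIM_RETRIES) {
		/* Before OOM, exhaust highatomic_reserve */
		return unreserve_highatomic_pageblock(ac);
	}

and unreserve_highatomic_pageblock() would end in a plain "return
false;", so ret needs no initializer.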

-- 
Michal Hocko
SUSE Labs

* Re: [PATCH v2 4/4] mm: make unreserve highatomic functions reliable
  2016-10-12  5:33   ` Minchan Kim
@ 2016-10-12  7:33     ` Michal Hocko
  -1 siblings, 0 replies; 20+ messages in thread
From: Michal Hocko @ 2016-10-12  7:33 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, Mel Gorman, Vlastimil Babka, Joonsoo Kim,
	linux-kernel, linux-mm, Sangseok Lee

On Wed 12-10-16 14:33:36, Minchan Kim wrote:
[...]
> @@ -2138,8 +2146,10 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac)
>  			 */
>  			set_pageblock_migratetype(page, ac->migratetype);
>  			ret = move_freepages_block(zone, page, ac->migratetype);
> -			spin_unlock_irqrestore(&zone->lock, flags);
> -			return ret;
> +			if (!drain && ret) {
> +				spin_unlock_irqrestore(&zone->lock, flags);
> +				return ret;
> +			}

I've already mentioned this during the previous discussion. This sounds
overly aggressive to me. Why do we want to drain the whole reserve and
risk that we won't be able to build up a new one after the OOM? Doing
one block at a time should be sufficient IMHO.

			if (ret) {
				spin_unlock_irqrestore(&zone->lock, flags);
				return ret;
			}

will do the trick and work for both the drain and !drain cases, which is
a good thing, because even the !drain case would like to see a block
freed. The only difference between the two is that the drain case would
really like to free something and ignores the "at least one block"
reserve.
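Applied to the hunk quoted above, the unified exit would look like this
(a sketch; the surrounding loop stays as in the patch):

			set_pageblock_migratetype(page, ac->migratetype);
			ret = move_freepages_block(zone, page, ac->migratetype);
			if (ret) {
				spin_unlock_irqrestore(&zone->lock, flags);
				return ret;
			}

Both cases then stop after one successfully moved block; drain differs
only in skipping the "preserve at least one pageblock" check at the top
of the zone loop.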

-- 
Michal Hocko
SUSE Labs

* Re: [PATCH v2 4/4] mm: make unreserve highatomic functions reliable
  2016-10-12  7:33     ` Michal Hocko
@ 2016-10-12  7:48       ` Minchan Kim
  -1 siblings, 0 replies; 20+ messages in thread
From: Minchan Kim @ 2016-10-12  7:48 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Andrew Morton, Mel Gorman, Vlastimil Babka, Joonsoo Kim,
	linux-kernel, linux-mm, Sangseok Lee

On Wed, Oct 12, 2016 at 09:33:28AM +0200, Michal Hocko wrote:
> On Wed 12-10-16 14:33:36, Minchan Kim wrote:
> [...]
> > @@ -2138,8 +2146,10 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac)
> >  			 */
> >  			set_pageblock_migratetype(page, ac->migratetype);
> >  			ret = move_freepages_block(zone, page, ac->migratetype);
> > -			spin_unlock_irqrestore(&zone->lock, flags);
> > -			return ret;
> > +			if (!drain && ret) {
> > +				spin_unlock_irqrestore(&zone->lock, flags);
> > +				return ret;
> > +			}
> 
> I've already mentioned this during the previous discussion. This sounds

Yes, we did, but I sent the wrong version from my git tree. :(

> overly aggressive to me. Why do we want to drain the whole reserve and
> risk that we won't be able to build up a new one after the OOM? Doing
> one block at a time should be sufficient IMHO.

I will resend after addressing all the review points.

Thanks.

Thread overview:
-- links below jump to the message on this page --
2016-10-12  5:33 [PATCH v2 0/4] use up highorder free pages before OOM Minchan Kim
2016-10-12  5:33 ` [PATCH v2 1/4] mm: don't steal highatomic pageblock Minchan Kim
2016-10-12  5:33 ` [PATCH v2 2/4] mm: prevent double decrease of nr_reserved_highatomic Minchan Kim
2016-10-12  5:33 ` [PATCH v2 3/4] mm: try to exhaust highatomic reserve before the OOM Minchan Kim
2016-10-12  7:14   ` Vlastimil Babka
2016-10-12  7:24   ` Michal Hocko
2016-10-12  5:33 ` [PATCH v2 4/4] mm: make unreserve highatomic functions reliable Minchan Kim
2016-10-12  7:19   ` Vlastimil Babka
2016-10-12  7:33   ` Michal Hocko
2016-10-12  7:48     ` Minchan Kim
