All of lore.kernel.org
 help / color / mirror / Atom feed
* [PATCH 0/4] use up highorder free pages before OOM
@ 2016-10-07  5:45 ` Minchan Kim
  0 siblings, 0 replies; 56+ messages in thread
From: Minchan Kim @ 2016-10-07  5:45 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Mel Gorman, Vlastimil Babka, Joonsoo Kim, linux-kernel, linux-mm,
	Sangseok Lee, Minchan Kim

I got OOM report from production team with v4.4 kernel.
It has enough free memory but failed to allocate order-0 page and
finally encounter OOM kill.
I could reproduce it with my test easily. Look at below.
The reason is free pages(19M) of DMA32 zone are reserved for
HIGHORDERATOMIC and doesn't unreserved before the OOM.

balloon invoked oom-killer: gfp_mask=0x24280ca(GFP_HIGHUSER_MOVABLE|__GFP_ZERO), order=0, oom_score_adj=0
balloon cpuset=/ mems_allowed=0
CPU: 1 PID: 8473 Comm: balloon Tainted: G        W  OE   4.8.0-rc7-00219-g3f74c9559583-dirty #3161
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Ubuntu-1.8.2-1ubuntu1 04/01/2014
 0000000000000000 ffff88007f15bbc8 ffffffff8138eb13 ffff88007f15bd88
 ffff88005a72a4c0 ffff88007f15bc28 ffffffff811d2d13 ffff88007f15bc08
 ffffffff8146a5ca ffffffff81c8df60 0000000000000015 0000000000000206
Call Trace:
 [<ffffffff8138eb13>] dump_stack+0x63/0x90
 [<ffffffff811d2d13>] dump_header+0x5c/0x1ce
 [<ffffffff8146a5ca>] ? virtballoon_oom_notify+0x2a/0x80
 [<ffffffff81171e5e>] oom_kill_process+0x22e/0x400
 [<ffffffff8117222c>] out_of_memory+0x1ac/0x210
 [<ffffffff811775ce>] __alloc_pages_nodemask+0x101e/0x1040
 [<ffffffff811a245a>] handle_mm_fault+0xa0a/0xbf0
 [<ffffffff8106029d>] __do_page_fault+0x1dd/0x4d0
 [<ffffffff81060653>] trace_do_page_fault+0x43/0x130
 [<ffffffff81059bda>] do_async_page_fault+0x1a/0xa0
 [<ffffffff817a3f38>] async_page_fault+0x28/0x30
Mem-Info:
active_anon:383949 inactive_anon:106724 isolated_anon:0
 active_file:15 inactive_file:44 isolated_file:0
 unevictable:0 dirty:0 writeback:24 unstable:0
 slab_reclaimable:2483 slab_unreclaimable:3326
 mapped:0 shmem:0 pagetables:1906 bounce:0
 free:6898 free_pcp:291 free_cma:0
Node 0 active_anon:1535796kB inactive_anon:426896kB active_file:60kB inactive_file:176kB unevictable:0kB isolated(anon):0kB isolated(file):0kB mapped:0kB dirty:0kB writeback:96kB shmem:0kB writeback_tmp:0kB unstable:0kB pages_scanned:1418 all_unreclaimable? no
DMA free:8188kB min:44kB low:56kB high:68kB active_anon:7648kB inactive_anon:0kB active_file:0kB inactive_file:4kB unevictable:0kB writepending:0kB present:15992kB managed:15908kB mlocked:0kB slab_reclaimable:0kB slab_unreclaimable:20kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
lowmem_reserve[]: 0 1952 1952 1952
DMA32 free:19404kB min:5628kB low:7624kB high:9620kB active_anon:1528148kB inactive_anon:426896kB active_file:60kB inactive_file:420kB unevictable:0kB writepending:96kB present:2080640kB managed:2030092kB mlocked:0kB slab_reclaimable:9932kB slab_unreclaimable:13284kB kernel_stack:2496kB pagetables:7624kB bounce:0kB free_pcp:900kB local_pcp:112kB free_cma:0kB
lowmem_reserve[]: 0 0 0 0
DMA: 0*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 2*4096kB (H) = 8192kB
DMA32: 7*4kB (H) 8*8kB (H) 30*16kB (H) 31*32kB (H) 14*64kB (H) 9*128kB (H) 2*256kB (H) 2*512kB (H) 4*1024kB (H) 5*2048kB (H) 0*4096kB = 19484kB
51131 total pagecache pages
50795 pages in swap cache
Swap cache stats: add 3532405601, delete 3532354806, find 124289150/1822712228
Free swap  = 8kB
Total swap = 255996kB
524158 pages RAM
0 pages HighMem/MovableOnly
12658 pages reserved
0 pages cma reserved
0 pages hwpoisoned

During the investigation, I found some problems with highatomic so
this patch aims to solve the problems and the final goal is to
unreserve every highatomic free pages before the OOM kill.

Patch 1 fixes accounting bug in several places of page allocators
Patch 2 fixes accounting bug caused by subtle race between freeing
function and unreserve_highatomic_pageblock.
Patch 3 changes unreseve scheme to use up every reserved pages
Patch 4 fixes accounting bug caused by mem_section shared by two zones.

Minchan Kim (4):
  mm: adjust reserved highatomic count
  mm: prevent double decrease of nr_reserved_highatomic
  mm: unreserve highatomic free pages fully before OOM
  mm: skip to reserve pageblock crossed zone boundary for HIGHATOMIC

 mm/page_alloc.c | 143 ++++++++++++++++++++++++++++++++++++++++++++++----------
 1 file changed, 118 insertions(+), 25 deletions(-)

-- 
2.7.4

^ permalink raw reply	[flat|nested] 56+ messages in thread

* [PATCH 0/4] use up highorder free pages before OOM
@ 2016-10-07  5:45 ` Minchan Kim
  0 siblings, 0 replies; 56+ messages in thread
From: Minchan Kim @ 2016-10-07  5:45 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Mel Gorman, Vlastimil Babka, Joonsoo Kim, linux-kernel, linux-mm,
	Sangseok Lee, Minchan Kim

I got OOM report from production team with v4.4 kernel.
It has enough free memory but failed to allocate order-0 page and
finally encounter OOM kill.
I could reproduce it with my test easily. Look at below.
The reason is free pages(19M) of DMA32 zone are reserved for
HIGHORDERATOMIC and doesn't unreserved before the OOM.

balloon invoked oom-killer: gfp_mask=0x24280ca(GFP_HIGHUSER_MOVABLE|__GFP_ZERO), order=0, oom_score_adj=0
balloon cpuset=/ mems_allowed=0
CPU: 1 PID: 8473 Comm: balloon Tainted: G        W  OE   4.8.0-rc7-00219-g3f74c9559583-dirty #3161
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Ubuntu-1.8.2-1ubuntu1 04/01/2014
 0000000000000000 ffff88007f15bbc8 ffffffff8138eb13 ffff88007f15bd88
 ffff88005a72a4c0 ffff88007f15bc28 ffffffff811d2d13 ffff88007f15bc08
 ffffffff8146a5ca ffffffff81c8df60 0000000000000015 0000000000000206
Call Trace:
 [<ffffffff8138eb13>] dump_stack+0x63/0x90
 [<ffffffff811d2d13>] dump_header+0x5c/0x1ce
 [<ffffffff8146a5ca>] ? virtballoon_oom_notify+0x2a/0x80
 [<ffffffff81171e5e>] oom_kill_process+0x22e/0x400
 [<ffffffff8117222c>] out_of_memory+0x1ac/0x210
 [<ffffffff811775ce>] __alloc_pages_nodemask+0x101e/0x1040
 [<ffffffff811a245a>] handle_mm_fault+0xa0a/0xbf0
 [<ffffffff8106029d>] __do_page_fault+0x1dd/0x4d0
 [<ffffffff81060653>] trace_do_page_fault+0x43/0x130
 [<ffffffff81059bda>] do_async_page_fault+0x1a/0xa0
 [<ffffffff817a3f38>] async_page_fault+0x28/0x30
Mem-Info:
active_anon:383949 inactive_anon:106724 isolated_anon:0
 active_file:15 inactive_file:44 isolated_file:0
 unevictable:0 dirty:0 writeback:24 unstable:0
 slab_reclaimable:2483 slab_unreclaimable:3326
 mapped:0 shmem:0 pagetables:1906 bounce:0
 free:6898 free_pcp:291 free_cma:0
Node 0 active_anon:1535796kB inactive_anon:426896kB active_file:60kB inactive_file:176kB unevictable:0kB isolated(anon):0kB isolated(file):0kB mapped:0kB dirty:0kB writeback:96kB shmem:0kB writeback_tmp:0kB unstable:0kB pages_scanned:1418 all_unreclaimable? no
DMA free:8188kB min:44kB low:56kB high:68kB active_anon:7648kB inactive_anon:0kB active_file:0kB inactive_file:4kB unevictable:0kB writepending:0kB present:15992kB managed:15908kB mlocked:0kB slab_reclaimable:0kB slab_unreclaimable:20kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
lowmem_reserve[]: 0 1952 1952 1952
DMA32 free:19404kB min:5628kB low:7624kB high:9620kB active_anon:1528148kB inactive_anon:426896kB active_file:60kB inactive_file:420kB unevictable:0kB writepending:96kB present:2080640kB managed:2030092kB mlocked:0kB slab_reclaimable:9932kB slab_unreclaimable:13284kB kernel_stack:2496kB pagetables:7624kB bounce:0kB free_pcp:900kB local_pcp:112kB free_cma:0kB
lowmem_reserve[]: 0 0 0 0
DMA: 0*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 2*4096kB (H) = 8192kB
DMA32: 7*4kB (H) 8*8kB (H) 30*16kB (H) 31*32kB (H) 14*64kB (H) 9*128kB (H) 2*256kB (H) 2*512kB (H) 4*1024kB (H) 5*2048kB (H) 0*4096kB = 19484kB
51131 total pagecache pages
50795 pages in swap cache
Swap cache stats: add 3532405601, delete 3532354806, find 124289150/1822712228
Free swap  = 8kB
Total swap = 255996kB
524158 pages RAM
0 pages HighMem/MovableOnly
12658 pages reserved
0 pages cma reserved
0 pages hwpoisoned

During the investigation, I found some problems with highatomic so
this patch aims to solve the problems and the final goal is to
unreserve every highatomic free pages before the OOM kill.

Patch 1 fixes accounting bug in several places of page allocators
Patch 2 fixes accounting bug caused by subtle race between freeing
function and unreserve_highatomic_pageblock.
Patch 3 changes unreseve scheme to use up every reserved pages
Patch 4 fixes accounting bug caused by mem_section shared by two zones.

Minchan Kim (4):
  mm: adjust reserved highatomic count
  mm: prevent double decrease of nr_reserved_highatomic
  mm: unreserve highatomic free pages fully before OOM
  mm: skip to reserve pageblock crossed zone boundary for HIGHATOMIC

 mm/page_alloc.c | 143 ++++++++++++++++++++++++++++++++++++++++++++++----------
 1 file changed, 118 insertions(+), 25 deletions(-)

-- 
2.7.4

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 56+ messages in thread

* [PATCH 1/4] mm: adjust reserved highatomic count
  2016-10-07  5:45 ` Minchan Kim
@ 2016-10-07  5:45   ` Minchan Kim
  -1 siblings, 0 replies; 56+ messages in thread
From: Minchan Kim @ 2016-10-07  5:45 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Mel Gorman, Vlastimil Babka, Joonsoo Kim, linux-kernel, linux-mm,
	Sangseok Lee, Minchan Kim

In page freeing path, migratetype is racy so that a highorderatomic
page could free into non-highorderatomic free list. If that page
is allocated, VM can change the pageblock from higorderatomic to
something. In that case, we should adjust nr_reserved_highatomic.
Otherwise, VM cannot reserve highorderatomic pageblocks any more
although it doesn't reach 1% limit. It means highorder atomic
allocation failure would be higher.

So, this patch decreases the account as well as migratetype
if it was MIGRATE_HIGHATOMIC.

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 mm/page_alloc.c | 44 ++++++++++++++++++++++++++++++++++++++------
 1 file changed, 38 insertions(+), 6 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 55ad0229ebf3..e7cbb3cc22fa 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -282,6 +282,9 @@ EXPORT_SYMBOL(nr_node_ids);
 EXPORT_SYMBOL(nr_online_nodes);
 #endif
 
+static void dec_highatomic_pageblock(struct zone *zone, struct page *page,
+					int migratetype);
+
 int page_group_by_mobility_disabled __read_mostly;
 
 #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
@@ -1935,7 +1938,14 @@ static void change_pageblock_range(struct page *pageblock_page,
 	int nr_pageblocks = 1 << (start_order - pageblock_order);
 
 	while (nr_pageblocks--) {
-		set_pageblock_migratetype(pageblock_page, migratetype);
+		if (get_pageblock_migratetype(pageblock_page) !=
+			MIGRATE_HIGHATOMIC)
+			set_pageblock_migratetype(pageblock_page,
+							migratetype);
+		else
+			dec_highatomic_pageblock(page_zone(pageblock_page),
+							pageblock_page,
+							migratetype);
 		pageblock_page += pageblock_nr_pages;
 	}
 }
@@ -1996,8 +2006,14 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
 
 	/* Claim the whole block if over half of it is free */
 	if (pages >= (1 << (pageblock_order-1)) ||
-			page_group_by_mobility_disabled)
-		set_pageblock_migratetype(page, start_type);
+			page_group_by_mobility_disabled) {
+		int mt = get_pageblock_migratetype(page);
+
+		if (mt != MIGRATE_HIGHATOMIC)
+			set_pageblock_migratetype(page, start_type);
+		else
+			dec_highatomic_pageblock(zone, page, start_type);
+	}
 }
 
 /*
@@ -2037,6 +2053,17 @@ int find_suitable_fallback(struct free_area *area, unsigned int order,
 	return -1;
 }
 
+static void dec_highatomic_pageblock(struct zone *zone, struct page *page,
+					int migratetype)
+{
+	if (zone->nr_reserved_highatomic <= pageblock_nr_pages)
+		return;
+
+	zone->nr_reserved_highatomic -= min(pageblock_nr_pages,
+					zone->nr_reserved_highatomic);
+	set_pageblock_migratetype(page, migratetype);
+}
+
 /*
  * Reserve a pageblock for exclusive use of high-order atomic allocations if
  * there are no empty page blocks that contain a page with a suitable order
@@ -2555,9 +2582,14 @@ int __isolate_free_page(struct page *page, unsigned int order)
 		struct page *endpage = page + (1 << order) - 1;
 		for (; page < endpage; page += pageblock_nr_pages) {
 			int mt = get_pageblock_migratetype(page);
-			if (!is_migrate_isolate(mt) && !is_migrate_cma(mt))
-				set_pageblock_migratetype(page,
-							  MIGRATE_MOVABLE);
+			if (!is_migrate_isolate(mt) && !is_migrate_cma(mt)) {
+				if (mt != MIGRATE_HIGHATOMIC)
+					set_pageblock_migratetype(page,
+							MIGRATE_MOVABLE);
+				else
+					dec_highatomic_pageblock(zone, page,
+							MIGRATE_MOVABLE);
+			}
 		}
 	}
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [PATCH 1/4] mm: adjust reserved highatomic count
@ 2016-10-07  5:45   ` Minchan Kim
  0 siblings, 0 replies; 56+ messages in thread
From: Minchan Kim @ 2016-10-07  5:45 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Mel Gorman, Vlastimil Babka, Joonsoo Kim, linux-kernel, linux-mm,
	Sangseok Lee, Minchan Kim

In page freeing path, migratetype is racy so that a highorderatomic
page could free into non-highorderatomic free list. If that page
is allocated, VM can change the pageblock from higorderatomic to
something. In that case, we should adjust nr_reserved_highatomic.
Otherwise, VM cannot reserve highorderatomic pageblocks any more
although it doesn't reach 1% limit. It means highorder atomic
allocation failure would be higher.

So, this patch decreases the account as well as migratetype
if it was MIGRATE_HIGHATOMIC.

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 mm/page_alloc.c | 44 ++++++++++++++++++++++++++++++++++++++------
 1 file changed, 38 insertions(+), 6 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 55ad0229ebf3..e7cbb3cc22fa 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -282,6 +282,9 @@ EXPORT_SYMBOL(nr_node_ids);
 EXPORT_SYMBOL(nr_online_nodes);
 #endif
 
+static void dec_highatomic_pageblock(struct zone *zone, struct page *page,
+					int migratetype);
+
 int page_group_by_mobility_disabled __read_mostly;
 
 #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
@@ -1935,7 +1938,14 @@ static void change_pageblock_range(struct page *pageblock_page,
 	int nr_pageblocks = 1 << (start_order - pageblock_order);
 
 	while (nr_pageblocks--) {
-		set_pageblock_migratetype(pageblock_page, migratetype);
+		if (get_pageblock_migratetype(pageblock_page) !=
+			MIGRATE_HIGHATOMIC)
+			set_pageblock_migratetype(pageblock_page,
+							migratetype);
+		else
+			dec_highatomic_pageblock(page_zone(pageblock_page),
+							pageblock_page,
+							migratetype);
 		pageblock_page += pageblock_nr_pages;
 	}
 }
@@ -1996,8 +2006,14 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
 
 	/* Claim the whole block if over half of it is free */
 	if (pages >= (1 << (pageblock_order-1)) ||
-			page_group_by_mobility_disabled)
-		set_pageblock_migratetype(page, start_type);
+			page_group_by_mobility_disabled) {
+		int mt = get_pageblock_migratetype(page);
+
+		if (mt != MIGRATE_HIGHATOMIC)
+			set_pageblock_migratetype(page, start_type);
+		else
+			dec_highatomic_pageblock(zone, page, start_type);
+	}
 }
 
 /*
@@ -2037,6 +2053,17 @@ int find_suitable_fallback(struct free_area *area, unsigned int order,
 	return -1;
 }
 
+static void dec_highatomic_pageblock(struct zone *zone, struct page *page,
+					int migratetype)
+{
+	if (zone->nr_reserved_highatomic <= pageblock_nr_pages)
+		return;
+
+	zone->nr_reserved_highatomic -= min(pageblock_nr_pages,
+					zone->nr_reserved_highatomic);
+	set_pageblock_migratetype(page, migratetype);
+}
+
 /*
  * Reserve a pageblock for exclusive use of high-order atomic allocations if
  * there are no empty page blocks that contain a page with a suitable order
@@ -2555,9 +2582,14 @@ int __isolate_free_page(struct page *page, unsigned int order)
 		struct page *endpage = page + (1 << order) - 1;
 		for (; page < endpage; page += pageblock_nr_pages) {
 			int mt = get_pageblock_migratetype(page);
-			if (!is_migrate_isolate(mt) && !is_migrate_cma(mt))
-				set_pageblock_migratetype(page,
-							  MIGRATE_MOVABLE);
+			if (!is_migrate_isolate(mt) && !is_migrate_cma(mt)) {
+				if (mt != MIGRATE_HIGHATOMIC)
+					set_pageblock_migratetype(page,
+							MIGRATE_MOVABLE);
+				else
+					dec_highatomic_pageblock(zone, page,
+							MIGRATE_MOVABLE);
+			}
 		}
 	}
 
-- 
2.7.4

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [PATCH 2/4] mm: prevent double decrease of nr_reserved_highatomic
  2016-10-07  5:45 ` Minchan Kim
@ 2016-10-07  5:45   ` Minchan Kim
  -1 siblings, 0 replies; 56+ messages in thread
From: Minchan Kim @ 2016-10-07  5:45 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Mel Gorman, Vlastimil Babka, Joonsoo Kim, linux-kernel, linux-mm,
	Sangseok Lee, Minchan Kim

There is race between page freeing and unreserved highatomic.

 CPU 0				    CPU 1

    free_hot_cold_page
      mt = get_pfnblock_migratetype
      set_pcppage_migratetype(page, mt)
    				    unreserve_highatomic_pageblock
    				    spin_lock_irqsave(&zone->lock)
    				    move_freepages_block
    				    set_pageblock_migratetype(page)
    				    spin_unlock_irqrestore(&zone->lock)
      free_pcppages_bulk
        __free_one_page(mt) <- mt is stale

By above race, a page on CPU 0 could go non-highorderatomic free list
since the pageblock's type is changed. By that, unreserve logic of
highorderatomic can decrease reserved count on a same pageblock
several times and then it will make mismatch between
nr_reserved_highatomic and the number of reserved pageblock.

So, this patch verifies whether the pageblock is highatomic or not
and decrease the count only if the pageblock is highatomic.

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 mm/page_alloc.c | 24 ++++++++++++++++++------
 1 file changed, 18 insertions(+), 6 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index e7cbb3cc22fa..d110cd640264 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2133,13 +2133,25 @@ static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
 				continue;
 
 			/*
-			 * It should never happen but changes to locking could
-			 * inadvertently allow a per-cpu drain to add pages
-			 * to MIGRATE_HIGHATOMIC while unreserving so be safe
-			 * and watch for underflows.
+			 * In page freeing path, migratetype change is racy so
+			 * we can counter several free pages in a pageblock
+			 * in this loop althoug we changed the pageblock type
+			 * from highatomic to ac->migratetype. So we should
+			 * adjust the count once.
 			 */
-			zone->nr_reserved_highatomic -= min(pageblock_nr_pages,
-				zone->nr_reserved_highatomic);
+			if (get_pageblock_migratetype(page) ==
+							MIGRATE_HIGHATOMIC) {
+				/*
+				 * It should never happen but changes to
+				 * locking could inadvertently allow a per-cpu
+				 * drain to add pages to MIGRATE_HIGHATOMIC
+				 * while unreserving so be safe and watch for
+				 * underflows.
+				 */
+				zone->nr_reserved_highatomic -= min(
+						pageblock_nr_pages,
+						zone->nr_reserved_highatomic);
+			}
 
 			/*
 			 * Convert to ac->migratetype and avoid the normal
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [PATCH 2/4] mm: prevent double decrease of nr_reserved_highatomic
@ 2016-10-07  5:45   ` Minchan Kim
  0 siblings, 0 replies; 56+ messages in thread
From: Minchan Kim @ 2016-10-07  5:45 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Mel Gorman, Vlastimil Babka, Joonsoo Kim, linux-kernel, linux-mm,
	Sangseok Lee, Minchan Kim

There is race between page freeing and unreserved highatomic.

 CPU 0				    CPU 1

    free_hot_cold_page
      mt = get_pfnblock_migratetype
      set_pcppage_migratetype(page, mt)
    				    unreserve_highatomic_pageblock
    				    spin_lock_irqsave(&zone->lock)
    				    move_freepages_block
    				    set_pageblock_migratetype(page)
    				    spin_unlock_irqrestore(&zone->lock)
      free_pcppages_bulk
        __free_one_page(mt) <- mt is stale

By above race, a page on CPU 0 could go non-highorderatomic free list
since the pageblock's type is changed. By that, unreserve logic of
highorderatomic can decrease reserved count on a same pageblock
several times and then it will make mismatch between
nr_reserved_highatomic and the number of reserved pageblock.

So, this patch verifies whether the pageblock is highatomic or not
and decrease the count only if the pageblock is highatomic.

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 mm/page_alloc.c | 24 ++++++++++++++++++------
 1 file changed, 18 insertions(+), 6 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index e7cbb3cc22fa..d110cd640264 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2133,13 +2133,25 @@ static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
 				continue;
 
 			/*
-			 * It should never happen but changes to locking could
-			 * inadvertently allow a per-cpu drain to add pages
-			 * to MIGRATE_HIGHATOMIC while unreserving so be safe
-			 * and watch for underflows.
+			 * In page freeing path, migratetype change is racy so
+			 * we can counter several free pages in a pageblock
+			 * in this loop althoug we changed the pageblock type
+			 * from highatomic to ac->migratetype. So we should
+			 * adjust the count once.
 			 */
-			zone->nr_reserved_highatomic -= min(pageblock_nr_pages,
-				zone->nr_reserved_highatomic);
+			if (get_pageblock_migratetype(page) ==
+							MIGRATE_HIGHATOMIC) {
+				/*
+				 * It should never happen but changes to
+				 * locking could inadvertently allow a per-cpu
+				 * drain to add pages to MIGRATE_HIGHATOMIC
+				 * while unreserving so be safe and watch for
+				 * underflows.
+				 */
+				zone->nr_reserved_highatomic -= min(
+						pageblock_nr_pages,
+						zone->nr_reserved_highatomic);
+			}
 
 			/*
 			 * Convert to ac->migratetype and avoid the normal
-- 
2.7.4

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [PATCH 3/4] mm: unreserve highatomic free pages fully before OOM
  2016-10-07  5:45 ` Minchan Kim
@ 2016-10-07  5:45   ` Minchan Kim
  -1 siblings, 0 replies; 56+ messages in thread
From: Minchan Kim @ 2016-10-07  5:45 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Mel Gorman, Vlastimil Babka, Joonsoo Kim, linux-kernel, linux-mm,
	Sangseok Lee, Minchan Kim

After fixing the race of highatomic page count, I still encounter
OOM with many free memory reserved as highatomic.

One of reason in my testing was we unreserve free pages only if
reclaim has progress. Otherwise, we cannot have chance to unreseve.

Other problem after fixing it was it doesn't guarantee every pages
unreserving of highatomic pageblock because it just release *a*
pageblock which could have few free pages so other context could
steal it easily so that the process stucked with direct reclaim
finally can encounter OOM although there are free pages which can
be unreserved.

This patch changes the logic so that it unreserves pageblocks with
no_progress_loop proportionally. IOW, in first retrial of reclaim,
it will try to unreserve a pageblock. In second retrial of reclaim,
it will try to unreserve 1/MAX_RECLAIM_RETRIES * reserved_pageblock
and finally all reserved pageblock before the OOM.

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 mm/page_alloc.c | 57 ++++++++++++++++++++++++++++++++++++++++++++-------------
 1 file changed, 44 insertions(+), 13 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d110cd640264..eeb047bb0e9d 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -71,6 +71,12 @@
 #include <asm/div64.h>
 #include "internal.h"
 
+/*
+ * Maximum number of reclaim retries without any progress before OOM killer
+ * is consider as the only way to move forward.
+ */
+#define MAX_RECLAIM_RETRIES 16
+
 /* prevent >1 _updater_ of zone percpu pageset ->high and ->batch fields */
 static DEFINE_MUTEX(pcp_batch_high_lock);
 #define MIN_PERCPU_PAGELIST_FRACTION	(8)
@@ -2107,7 +2113,8 @@ static void reserve_highatomic_pageblock(struct page *page, struct zone *zone,
  * intense memory pressure but failed atomic allocations should be easier
  * to recover from than an OOM.
  */
-static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
+static int unreserve_highatomic_pageblock(const struct alloc_context *ac,
+						int no_progress_loops)
 {
 	struct zonelist *zonelist = ac->zonelist;
 	unsigned long flags;
@@ -2115,15 +2122,40 @@ static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
 	struct zone *zone;
 	struct page *page;
 	int order;
+	int unreserved_pages = 0;
 
 	for_each_zone_zonelist_nodemask(zone, z, zonelist, ac->high_zoneidx,
 								ac->nodemask) {
-		/* Preserve at least one pageblock */
-		if (zone->nr_reserved_highatomic <= pageblock_nr_pages)
+		unsigned long unreserve_pages_max;
+
+		/*
+		 * Try to preserve at least one pageblock but use up before
+		 * OOM kill.
+		 */
+		if (no_progress_loops < MAX_RECLAIM_RETRIES &&
+			zone->nr_reserved_highatomic <= pageblock_nr_pages)
 			continue;
 
 		spin_lock_irqsave(&zone->lock, flags);
-		for (order = 0; order < MAX_ORDER; order++) {
+		if (no_progress_loops < MAX_RECLAIM_RETRIES) {
+			unreserve_pages_max = no_progress_loops *
+					zone->nr_reserved_highatomic /
+					MAX_RECLAIM_RETRIES;
+			unreserve_pages_max = max(unreserve_pages_max,
+						pageblock_nr_pages);
+		} else {
+			/*
+			 * By race with page free functions, !highatomic
+			 * pageblocks can have a free page in highatomic
+			 * migratetype free list. So if we are about to
+			 * kill some process, unreserve every free pages
+			 * in highorderatomic.
+			 */
+			unreserve_pages_max = -1UL;
+		}
+
+		for (order = 0; order < MAX_ORDER &&
+				unreserve_pages_max > 0; order++) {
 			struct free_area *area = &(zone->free_area[order]);
 
 			page = list_first_entry_or_null(
@@ -2151,6 +2183,9 @@ static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
 				zone->nr_reserved_highatomic -= min(
 						pageblock_nr_pages,
 						zone->nr_reserved_highatomic);
+				unreserve_pages_max -= min(pageblock_nr_pages,
+					zone->nr_reserved_highatomic);
+				unreserved_pages += 1 << page_order(page);
 			}
 
 			/*
@@ -2164,11 +2199,11 @@ static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
 			 */
 			set_pageblock_migratetype(page, ac->migratetype);
 			move_freepages_block(zone, page, ac->migratetype);
-			spin_unlock_irqrestore(&zone->lock, flags);
-			return;
 		}
 		spin_unlock_irqrestore(&zone->lock, flags);
 	}
+
+	return unreserved_pages;
 }
 
 /* Remove an element from the buddy allocator from the fallback list */
@@ -3370,7 +3405,6 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
 	 * Shrink them them and try again
 	 */
 	if (!page && !drained) {
-		unreserve_highatomic_pageblock(ac);
 		drain_all_pages(NULL);
 		drained = true;
 		goto retry;
@@ -3449,12 +3483,6 @@ bool gfp_pfmemalloc_allowed(gfp_t gfp_mask)
 }
 
 /*
- * Maximum number of reclaim retries without any progress before OOM killer
- * is consider as the only way to move forward.
- */
-#define MAX_RECLAIM_RETRIES 16
-
-/*
  * Checks whether it makes sense to retry the reclaim to make a forward progress
  * for the given allocation request.
  * The reclaim feedback represented by did_some_progress (any progress during
@@ -3490,6 +3518,9 @@ should_reclaim_retry(gfp_t gfp_mask, unsigned order,
 	if (*no_progress_loops > MAX_RECLAIM_RETRIES)
 		return false;
 
+	if (unreserve_highatomic_pageblock(ac, *no_progress_loops))
+		return true;
+
 	/*
 	 * Keep reclaiming pages while there is a chance this will lead
 	 * somewhere.  If none of the target zones can satisfy our allocation
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [PATCH 3/4] mm: unreserve highatomic free pages fully before OOM
@ 2016-10-07  5:45   ` Minchan Kim
  0 siblings, 0 replies; 56+ messages in thread
From: Minchan Kim @ 2016-10-07  5:45 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Mel Gorman, Vlastimil Babka, Joonsoo Kim, linux-kernel, linux-mm,
	Sangseok Lee, Minchan Kim

After fixing the race of highatomic page count, I still encounter
OOM with many free memory reserved as highatomic.

One of reason in my testing was we unreserve free pages only if
reclaim has progress. Otherwise, we cannot have chance to unreseve.

Other problem after fixing it was it doesn't guarantee every pages
unreserving of highatomic pageblock because it just release *a*
pageblock which could have few free pages so other context could
steal it easily so that the process stucked with direct reclaim
finally can encounter OOM although there are free pages which can
be unreserved.

This patch changes the logic so that it unreserves pageblocks with
no_progress_loop proportionally. IOW, in first retrial of reclaim,
it will try to unreserve a pageblock. In second retrial of reclaim,
it will try to unreserve 1/MAX_RECLAIM_RETRIES * reserved_pageblock
and finally all reserved pageblock before the OOM.

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 mm/page_alloc.c | 57 ++++++++++++++++++++++++++++++++++++++++++++-------------
 1 file changed, 44 insertions(+), 13 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d110cd640264..eeb047bb0e9d 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -71,6 +71,12 @@
 #include <asm/div64.h>
 #include "internal.h"
 
+/*
+ * Maximum number of reclaim retries without any progress before OOM killer
+ * is consider as the only way to move forward.
+ */
+#define MAX_RECLAIM_RETRIES 16
+
 /* prevent >1 _updater_ of zone percpu pageset ->high and ->batch fields */
 static DEFINE_MUTEX(pcp_batch_high_lock);
 #define MIN_PERCPU_PAGELIST_FRACTION	(8)
@@ -2107,7 +2113,8 @@ static void reserve_highatomic_pageblock(struct page *page, struct zone *zone,
  * intense memory pressure but failed atomic allocations should be easier
  * to recover from than an OOM.
  */
-static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
+static int unreserve_highatomic_pageblock(const struct alloc_context *ac,
+						int no_progress_loops)
 {
 	struct zonelist *zonelist = ac->zonelist;
 	unsigned long flags;
@@ -2115,15 +2122,40 @@ static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
 	struct zone *zone;
 	struct page *page;
 	int order;
+	int unreserved_pages = 0;
 
 	for_each_zone_zonelist_nodemask(zone, z, zonelist, ac->high_zoneidx,
 								ac->nodemask) {
-		/* Preserve at least one pageblock */
-		if (zone->nr_reserved_highatomic <= pageblock_nr_pages)
+		unsigned long unreserve_pages_max;
+
+		/*
+		 * Try to preserve at least one pageblock but use up before
+		 * OOM kill.
+		 */
+		if (no_progress_loops < MAX_RECLAIM_RETRIES &&
+			zone->nr_reserved_highatomic <= pageblock_nr_pages)
 			continue;
 
 		spin_lock_irqsave(&zone->lock, flags);
-		for (order = 0; order < MAX_ORDER; order++) {
+		if (no_progress_loops < MAX_RECLAIM_RETRIES) {
+			unreserve_pages_max = no_progress_loops *
+					zone->nr_reserved_highatomic /
+					MAX_RECLAIM_RETRIES;
+			unreserve_pages_max = max(unreserve_pages_max,
+						pageblock_nr_pages);
+		} else {
+			/*
+			 * By race with page free functions, !highatomic
+			 * pageblocks can have a free page in highatomic
+			 * migratetype free list. So if we are about to
+			 * kill some process, unreserve every free pages
+			 * in highorderatomic.
+			 */
+			unreserve_pages_max = -1UL;
+		}
+
+		for (order = 0; order < MAX_ORDER &&
+				unreserve_pages_max > 0; order++) {
 			struct free_area *area = &(zone->free_area[order]);
 
 			page = list_first_entry_or_null(
@@ -2151,6 +2183,9 @@ static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
 				zone->nr_reserved_highatomic -= min(
 						pageblock_nr_pages,
 						zone->nr_reserved_highatomic);
+				unreserve_pages_max -= min(pageblock_nr_pages,
+					zone->nr_reserved_highatomic);
+				unreserved_pages += 1 << page_order(page);
 			}
 
 			/*
@@ -2164,11 +2199,11 @@ static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
 			 */
 			set_pageblock_migratetype(page, ac->migratetype);
 			move_freepages_block(zone, page, ac->migratetype);
-			spin_unlock_irqrestore(&zone->lock, flags);
-			return;
 		}
 		spin_unlock_irqrestore(&zone->lock, flags);
 	}
+
+	return unreserved_pages;
 }
 
 /* Remove an element from the buddy allocator from the fallback list */
@@ -3370,7 +3405,6 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
 	 * Shrink them them and try again
 	 */
 	if (!page && !drained) {
-		unreserve_highatomic_pageblock(ac);
 		drain_all_pages(NULL);
 		drained = true;
 		goto retry;
@@ -3449,12 +3483,6 @@ bool gfp_pfmemalloc_allowed(gfp_t gfp_mask)
 }
 
 /*
- * Maximum number of reclaim retries without any progress before OOM killer
- * is consider as the only way to move forward.
- */
-#define MAX_RECLAIM_RETRIES 16
-
-/*
  * Checks whether it makes sense to retry the reclaim to make a forward progress
  * for the given allocation request.
  * The reclaim feedback represented by did_some_progress (any progress during
@@ -3490,6 +3518,9 @@ should_reclaim_retry(gfp_t gfp_mask, unsigned order,
 	if (*no_progress_loops > MAX_RECLAIM_RETRIES)
 		return false;
 
+	if (unreserve_highatomic_pageblock(ac, *no_progress_loops))
+		return true;
+
 	/*
 	 * Keep reclaiming pages while there is a chance this will lead
 	 * somewhere.  If none of the target zones can satisfy our allocation
-- 
2.7.4

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [PATCH 4/4] mm: skip to reserve pageblock crossed zone boundary for HIGHATOMIC
  2016-10-07  5:45 ` Minchan Kim
@ 2016-10-07  5:45   ` Minchan Kim
  -1 siblings, 0 replies; 56+ messages in thread
From: Minchan Kim @ 2016-10-07  5:45 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Mel Gorman, Vlastimil Babka, Joonsoo Kim, linux-kernel, linux-mm,
	Sangseok Lee, Minchan Kim

In CONFIG_SPARSEMEM, VM shares a pageblock_flags of a mem_section
between two zones if the pageblock cross zone boundaries. It means
a zone lock cannot protect pageblock migratype change's race.

It might be not a problem because migratetype inherently was racy
but intrdocuing with CMA, it was not true any more and have been fixed.
(I hope it should be solved more general approach however...)
And then, it's time for MIGRATE_HIGHATOMIC.

More importantly, HIGHATOMIC migratetype is not big(i.e., 1%) reserve
in system so let's skip such crippled pageblock to try to reserve
full 1% free memory.

Debugged-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 mm/page_alloc.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index eeb047bb0e9d..d76bb50baf61 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2098,6 +2098,24 @@ static void reserve_highatomic_pageblock(struct page *page, struct zone *zone,
 	mt = get_pageblock_migratetype(page);
 	if (mt != MIGRATE_HIGHATOMIC &&
 			!is_migrate_isolate(mt) && !is_migrate_cma(mt)) {
+		/*
+		 * If the pageblock cross zone boundaries, we need both
+		 * zone locks but doesn't want to make complex because
+		 * highatomic pageblock is small so that we want to reserve
+		 * sane(?) pageblock.
+		 */
+		unsigned long start_pfn, end_pfn;
+
+		start_pfn = page_to_pfn(page);
+		start_pfn = start_pfn & ~(pageblock_nr_pages - 1);
+
+		if (!zone_spans_pfn(zone, start_pfn))
+			goto out_unlock;
+
+		end_pfn = start_pfn + pageblock_nr_pages - 1;
+		if (!zone_spans_pfn(zone, end_pfn))
+			goto out_unlock;
+
 		zone->nr_reserved_highatomic += pageblock_nr_pages;
 		set_pageblock_migratetype(page, MIGRATE_HIGHATOMIC);
 		move_freepages_block(zone, page, MIGRATE_HIGHATOMIC);
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 56+ messages in thread

* [PATCH 4/4] mm: skip to reserve pageblock crossed zone boundary for HIGHATOMIC
@ 2016-10-07  5:45   ` Minchan Kim
  0 siblings, 0 replies; 56+ messages in thread
From: Minchan Kim @ 2016-10-07  5:45 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Mel Gorman, Vlastimil Babka, Joonsoo Kim, linux-kernel, linux-mm,
	Sangseok Lee, Minchan Kim

In CONFIG_SPARSEMEM, VM shares a pageblock_flags of a mem_section
between two zones if the pageblock cross zone boundaries. It means
a zone lock cannot protect pageblock migratype change's race.

It might be not a problem because migratetype inherently was racy
but intrdocuing with CMA, it was not true any more and have been fixed.
(I hope it should be solved more general approach however...)
And then, it's time for MIGRATE_HIGHATOMIC.

More importantly, HIGHATOMIC migratetype is not big(i.e., 1%) reserve
in system so let's skip such crippled pageblock to try to reserve
full 1% free memory.

Debugged-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 mm/page_alloc.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index eeb047bb0e9d..d76bb50baf61 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2098,6 +2098,24 @@ static void reserve_highatomic_pageblock(struct page *page, struct zone *zone,
 	mt = get_pageblock_migratetype(page);
 	if (mt != MIGRATE_HIGHATOMIC &&
 			!is_migrate_isolate(mt) && !is_migrate_cma(mt)) {
+		/*
+		 * If the pageblock cross zone boundaries, we need both
+		 * zone locks but doesn't want to make complex because
+		 * highatomic pageblock is small so that we want to reserve
+		 * sane(?) pageblock.
+		 */
+		unsigned long start_pfn, end_pfn;
+
+		start_pfn = page_to_pfn(page);
+		start_pfn = start_pfn & ~(pageblock_nr_pages - 1);
+
+		if (!zone_spans_pfn(zone, start_pfn))
+			goto out_unlock;
+
+		end_pfn = start_pfn + pageblock_nr_pages - 1;
+		if (!zone_spans_pfn(zone, end_pfn))
+			goto out_unlock;
+
 		zone->nr_reserved_highatomic += pageblock_nr_pages;
 		set_pageblock_migratetype(page, MIGRATE_HIGHATOMIC);
 		move_freepages_block(zone, page, MIGRATE_HIGHATOMIC);
-- 
2.7.4

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply related	[flat|nested] 56+ messages in thread

* Re: [PATCH 3/4] mm: unreserve highatomic free pages fully before OOM
  2016-10-07  5:45   ` Minchan Kim
@ 2016-10-07  9:09     ` Michal Hocko
  -1 siblings, 0 replies; 56+ messages in thread
From: Michal Hocko @ 2016-10-07  9:09 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, Mel Gorman, Vlastimil Babka, Joonsoo Kim,
	linux-kernel, linux-mm, Sangseok Lee

On Fri 07-10-16 14:45:35, Minchan Kim wrote:
> After fixing the race of highatomic page count, I still encounter
> OOM with many free memory reserved as highatomic.
> 
> One of reason in my testing was we unreserve free pages only if
> reclaim has progress. Otherwise, we cannot have chance to unreseve.
> 
> Other problem after fixing it was it doesn't guarantee every pages
> unreserving of highatomic pageblock because it just release *a*
> pageblock which could have few free pages so other context could
> steal it easily so that the process stucked with direct reclaim
> finally can encounter OOM although there are free pages which can
> be unreserved.
> 
> This patch changes the logic so that it unreserves pageblocks with
> no_progress_loop proportionally. IOW, in first retrial of reclaim,
> it will try to unreserve a pageblock. In second retrial of reclaim,
> it will try to unreserve 1/MAX_RECLAIM_RETRIES * reserved_pageblock
> and finally all reserved pageblock before the OOM.
> 
> Signed-off-by: Minchan Kim <minchan@kernel.org>
> ---
>  mm/page_alloc.c | 57 ++++++++++++++++++++++++++++++++++++++++++++-------------
>  1 file changed, 44 insertions(+), 13 deletions(-)

This sounds much more complex then it needs to be IMHO. Why something as
simple as thhe following wouldn't work? Please note that I even didn't
try to compile this. It is just give you an idea.
---
 mm/page_alloc.c | 26 ++++++++++++++++++++------
 1 file changed, 20 insertions(+), 6 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 73f60ad6315f..e575a4f38555 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2056,7 +2056,8 @@ static void reserve_highatomic_pageblock(struct page *page, struct zone *zone,
  * intense memory pressure but failed atomic allocations should be easier
  * to recover from than an OOM.
  */
-static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
+static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
+		bool force)
 {
 	struct zonelist *zonelist = ac->zonelist;
 	unsigned long flags;
@@ -2067,8 +2068,14 @@ static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
 
 	for_each_zone_zonelist_nodemask(zone, z, zonelist, ac->high_zoneidx,
 								ac->nodemask) {
-		/* Preserve at least one pageblock */
-		if (zone->nr_reserved_highatomic <= pageblock_nr_pages)
+		if (!zone->nr_reserved_highatomic)
+			continue;
+
+		/*
+		 * Preserve at least one pageblock unless we are really running
+		 * out of memory
+		 */
+		if (!force && zone->nr_reserved_highatomic <= pageblock_nr_pages)
 			continue;
 
 		spin_lock_irqsave(&zone->lock, flags);
@@ -2102,10 +2109,12 @@ static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
 			set_pageblock_migratetype(page, ac->migratetype);
 			move_freepages_block(zone, page, ac->migratetype);
 			spin_unlock_irqrestore(&zone->lock, flags);
-			return;
+			return true;
 		}
 		spin_unlock_irqrestore(&zone->lock, flags);
 	}
+
+	return false;
 }
 
 /* Remove an element from the buddy allocator from the fallback list */
@@ -3302,7 +3311,7 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
 	 * Shrink them them and try again
 	 */
 	if (!page && !drained) {
-		unreserve_highatomic_pageblock(ac);
+		unreserve_highatomic_pageblock(ac, false);
 		drain_all_pages(NULL);
 		drained = true;
 		goto retry;
@@ -3418,9 +3427,14 @@ should_reclaim_retry(gfp_t gfp_mask, unsigned order,
 	/*
 	 * Make sure we converge to OOM if we cannot make any progress
 	 * several times in the row.
+	 * Do last desparate attempt to throw high atomic reserves away
+	 * before we give up
 	 */
-	if (*no_progress_loops > MAX_RECLAIM_RETRIES)
+	if (*no_progress_loops > MAX_RECLAIM_RETRIES) {
+		if (unreserve_highatomic_pageblock(ac, true))
+			return true;
 		return false;
+	}
 
 	/*
 	 * Keep reclaiming pages while there is a chance this will lead
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply related	[flat|nested] 56+ messages in thread

* Re: [PATCH 3/4] mm: unreserve highatomic free pages fully before OOM
@ 2016-10-07  9:09     ` Michal Hocko
  0 siblings, 0 replies; 56+ messages in thread
From: Michal Hocko @ 2016-10-07  9:09 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, Mel Gorman, Vlastimil Babka, Joonsoo Kim,
	linux-kernel, linux-mm, Sangseok Lee

On Fri 07-10-16 14:45:35, Minchan Kim wrote:
> After fixing the race of highatomic page count, I still encounter
> OOM with many free memory reserved as highatomic.
> 
> One of reason in my testing was we unreserve free pages only if
> reclaim has progress. Otherwise, we cannot have chance to unreseve.
> 
> Other problem after fixing it was it doesn't guarantee every pages
> unreserving of highatomic pageblock because it just release *a*
> pageblock which could have few free pages so other context could
> steal it easily so that the process stucked with direct reclaim
> finally can encounter OOM although there are free pages which can
> be unreserved.
> 
> This patch changes the logic so that it unreserves pageblocks with
> no_progress_loop proportionally. IOW, in first retrial of reclaim,
> it will try to unreserve a pageblock. In second retrial of reclaim,
> it will try to unreserve 1/MAX_RECLAIM_RETRIES * reserved_pageblock
> and finally all reserved pageblock before the OOM.
> 
> Signed-off-by: Minchan Kim <minchan@kernel.org>
> ---
>  mm/page_alloc.c | 57 ++++++++++++++++++++++++++++++++++++++++++++-------------
>  1 file changed, 44 insertions(+), 13 deletions(-)

This sounds much more complex then it needs to be IMHO. Why something as
simple as thhe following wouldn't work? Please note that I even didn't
try to compile this. It is just give you an idea.
---
 mm/page_alloc.c | 26 ++++++++++++++++++++------
 1 file changed, 20 insertions(+), 6 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 73f60ad6315f..e575a4f38555 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2056,7 +2056,8 @@ static void reserve_highatomic_pageblock(struct page *page, struct zone *zone,
  * intense memory pressure but failed atomic allocations should be easier
  * to recover from than an OOM.
  */
-static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
+static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
+		bool force)
 {
 	struct zonelist *zonelist = ac->zonelist;
 	unsigned long flags;
@@ -2067,8 +2068,14 @@ static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
 
 	for_each_zone_zonelist_nodemask(zone, z, zonelist, ac->high_zoneidx,
 								ac->nodemask) {
-		/* Preserve at least one pageblock */
-		if (zone->nr_reserved_highatomic <= pageblock_nr_pages)
+		if (!zone->nr_reserved_highatomic)
+			continue;
+
+		/*
+		 * Preserve at least one pageblock unless we are really running
+		 * out of memory
+		 */
+		if (!force && zone->nr_reserved_highatomic <= pageblock_nr_pages)
 			continue;
 
 		spin_lock_irqsave(&zone->lock, flags);
@@ -2102,10 +2109,12 @@ static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
 			set_pageblock_migratetype(page, ac->migratetype);
 			move_freepages_block(zone, page, ac->migratetype);
 			spin_unlock_irqrestore(&zone->lock, flags);
-			return;
+			return true;
 		}
 		spin_unlock_irqrestore(&zone->lock, flags);
 	}
+
+	return false;
 }
 
 /* Remove an element from the buddy allocator from the fallback list */
@@ -3302,7 +3311,7 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
 	 * Shrink them them and try again
 	 */
 	if (!page && !drained) {
-		unreserve_highatomic_pageblock(ac);
+		unreserve_highatomic_pageblock(ac, false);
 		drain_all_pages(NULL);
 		drained = true;
 		goto retry;
@@ -3418,9 +3427,14 @@ should_reclaim_retry(gfp_t gfp_mask, unsigned order,
 	/*
 	 * Make sure we converge to OOM if we cannot make any progress
 	 * several times in the row.
+	 * Do last desparate attempt to throw high atomic reserves away
+	 * before we give up
 	 */
-	if (*no_progress_loops > MAX_RECLAIM_RETRIES)
+	if (*no_progress_loops > MAX_RECLAIM_RETRIES) {
+		if (unreserve_highatomic_pageblock(ac, true))
+			return true;
 		return false;
+	}
 
 	/*
 	 * Keep reclaiming pages while there is a chance this will lead
-- 
Michal Hocko
SUSE Labs

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply related	[flat|nested] 56+ messages in thread

* Re: [PATCH 0/4] use up highorder free pages before OOM
  2016-10-07  5:45 ` Minchan Kim
@ 2016-10-07  9:16   ` Michal Hocko
  -1 siblings, 0 replies; 56+ messages in thread
From: Michal Hocko @ 2016-10-07  9:16 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, Mel Gorman, Vlastimil Babka, Joonsoo Kim,
	linux-kernel, linux-mm, Sangseok Lee

On Fri 07-10-16 14:45:32, Minchan Kim wrote:
> I got OOM report from production team with v4.4 kernel.
> It has enough free memory but failed to allocate order-0 page and
> finally encounter OOM kill.
> I could reproduce it with my test easily. Look at below.
> The reason is free pages(19M) of DMA32 zone are reserved for
> HIGHORDERATOMIC and doesn't unreserved before the OOM.

Is this really reproducible?

[...]
> active_anon:383949 inactive_anon:106724 isolated_anon:0
>  active_file:15 inactive_file:44 isolated_file:0
>  unevictable:0 dirty:0 writeback:24 unstable:0
>  slab_reclaimable:2483 slab_unreclaimable:3326
>  mapped:0 shmem:0 pagetables:1906 bounce:0
>  free:6898 free_pcp:291 free_cma:0
[...]
> Free swap  = 8kB
> Total swap = 255996kB
> 524158 pages RAM
> 0 pages HighMem/MovableOnly
> 12658 pages reserved
> 0 pages cma reserved
> 0 pages hwpoisoned

>From the above you can see that you are pretty much out of memory. There
is basically no pagecache to reclaim and your anon memory is not 
reclaimable either because the swap is basically full. It is true that 
the high atomic reserves consume 19MB which could be reused but this 
less than 1%, especially when you compare that to the amount of reserved
memory.

So while I do agree that potential issues - misaccounting and others you
are addressing in the follow up patch - are good to fix but I believe that
draining last 19M is not something that would reliably get you over the
edge. Your workload (93% of memory sitting on anon LRU with swap full)
simply doesn't fit into the amount of memory you have available.
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 0/4] use up highorder free pages before OOM
@ 2016-10-07  9:16   ` Michal Hocko
  0 siblings, 0 replies; 56+ messages in thread
From: Michal Hocko @ 2016-10-07  9:16 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, Mel Gorman, Vlastimil Babka, Joonsoo Kim,
	linux-kernel, linux-mm, Sangseok Lee

On Fri 07-10-16 14:45:32, Minchan Kim wrote:
> I got OOM report from production team with v4.4 kernel.
> It has enough free memory but failed to allocate order-0 page and
> finally encounter OOM kill.
> I could reproduce it with my test easily. Look at below.
> The reason is free pages(19M) of DMA32 zone are reserved for
> HIGHORDERATOMIC and doesn't unreserved before the OOM.

Is this really reproducible?

[...]
> active_anon:383949 inactive_anon:106724 isolated_anon:0
>  active_file:15 inactive_file:44 isolated_file:0
>  unevictable:0 dirty:0 writeback:24 unstable:0
>  slab_reclaimable:2483 slab_unreclaimable:3326
>  mapped:0 shmem:0 pagetables:1906 bounce:0
>  free:6898 free_pcp:291 free_cma:0
[...]
> Free swap  = 8kB
> Total swap = 255996kB
> 524158 pages RAM
> 0 pages HighMem/MovableOnly
> 12658 pages reserved
> 0 pages cma reserved
> 0 pages hwpoisoned

>From the above you can see that you are pretty much out of memory. There
is basically no pagecache to reclaim and your anon memory is not 
reclaimable either because the swap is basically full. It is true that 
the high atomic reserves consume 19MB which could be reused but this 
less than 1%, especially when you compare that to the amount of reserved
memory.

So while I do agree that potential issues - misaccounting and others you
are addressing in the follow up patch - are good to fix but I believe that
draining last 19M is not something that would reliably get you over the
edge. Your workload (93% of memory sitting on anon LRU with swap full)
simply doesn't fit into the amount of memory you have available.
-- 
Michal Hocko
SUSE Labs

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 1/4] mm: adjust reserved highatomic count
  2016-10-07  5:45   ` Minchan Kim
@ 2016-10-07 12:30     ` Vlastimil Babka
  -1 siblings, 0 replies; 56+ messages in thread
From: Vlastimil Babka @ 2016-10-07 12:30 UTC (permalink / raw)
  To: Minchan Kim, Andrew Morton
  Cc: Mel Gorman, Joonsoo Kim, linux-kernel, linux-mm, Sangseok Lee

On 10/07/2016 07:45 AM, Minchan Kim wrote:
> In page freeing path, migratetype is racy so that a highorderatomic
> page could free into non-highorderatomic free list.

Yes. If page from a pageblock went to a pcplist before that pageblock 
was reserved as highatomic, free_pcppages_bulk() will misplace it.

> If that page
> is allocated, VM can change the pageblock from higorderatomic to
> something.

More specifically, steal_suitable_fallback(). Yes.

> In that case, we should adjust nr_reserved_highatomic.
> Otherwise, VM cannot reserve highorderatomic pageblocks any more
> although it doesn't reach 1% limit. It means highorder atomic
> allocation failure would be higher.
>
> So, this patch decreases the account as well as migratetype
> if it was MIGRATE_HIGHATOMIC.
>
> Signed-off-by: Minchan Kim <minchan@kernel.org>

Hm wouldn't it be simpler just to prevent the pageblock's migratetype to 
be changed if it's highatomic? Possibly also not do 
move_freepages_block() in that case. Most accurate would be to put such 
misplaced page on the proper freelist and retry the fallback, but that 
might be overkill.

> ---
>  mm/page_alloc.c | 44 ++++++++++++++++++++++++++++++++++++++------
>  1 file changed, 38 insertions(+), 6 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 55ad0229ebf3..e7cbb3cc22fa 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -282,6 +282,9 @@ EXPORT_SYMBOL(nr_node_ids);
>  EXPORT_SYMBOL(nr_online_nodes);
>  #endif
>
> +static void dec_highatomic_pageblock(struct zone *zone, struct page *page,
> +					int migratetype);
> +
>  int page_group_by_mobility_disabled __read_mostly;
>
>  #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
> @@ -1935,7 +1938,14 @@ static void change_pageblock_range(struct page *pageblock_page,
>  	int nr_pageblocks = 1 << (start_order - pageblock_order);
>
>  	while (nr_pageblocks--) {
> -		set_pageblock_migratetype(pageblock_page, migratetype);
> +		if (get_pageblock_migratetype(pageblock_page) !=
> +			MIGRATE_HIGHATOMIC)
> +			set_pageblock_migratetype(pageblock_page,
> +							migratetype);
> +		else
> +			dec_highatomic_pageblock(page_zone(pageblock_page),
> +							pageblock_page,
> +							migratetype);
>  		pageblock_page += pageblock_nr_pages;
>  	}
>  }
> @@ -1996,8 +2006,14 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
>
>  	/* Claim the whole block if over half of it is free */
>  	if (pages >= (1 << (pageblock_order-1)) ||
> -			page_group_by_mobility_disabled)
> -		set_pageblock_migratetype(page, start_type);
> +			page_group_by_mobility_disabled) {
> +		int mt = get_pageblock_migratetype(page);
> +
> +		if (mt != MIGRATE_HIGHATOMIC)
> +			set_pageblock_migratetype(page, start_type);
> +		else
> +			dec_highatomic_pageblock(zone, page, start_type);
> +	}
>  }
>
>  /*
> @@ -2037,6 +2053,17 @@ int find_suitable_fallback(struct free_area *area, unsigned int order,
>  	return -1;
>  }
>
> +static void dec_highatomic_pageblock(struct zone *zone, struct page *page,
> +					int migratetype)
> +{
> +	if (zone->nr_reserved_highatomic <= pageblock_nr_pages)
> +		return;
> +
> +	zone->nr_reserved_highatomic -= min(pageblock_nr_pages,
> +					zone->nr_reserved_highatomic);
> +	set_pageblock_migratetype(page, migratetype);
> +}
> +
>  /*
>   * Reserve a pageblock for exclusive use of high-order atomic allocations if
>   * there are no empty page blocks that contain a page with a suitable order
> @@ -2555,9 +2582,14 @@ int __isolate_free_page(struct page *page, unsigned int order)
>  		struct page *endpage = page + (1 << order) - 1;
>  		for (; page < endpage; page += pageblock_nr_pages) {
>  			int mt = get_pageblock_migratetype(page);
> -			if (!is_migrate_isolate(mt) && !is_migrate_cma(mt))
> -				set_pageblock_migratetype(page,
> -							  MIGRATE_MOVABLE);
> +			if (!is_migrate_isolate(mt) && !is_migrate_cma(mt)) {
> +				if (mt != MIGRATE_HIGHATOMIC)
> +					set_pageblock_migratetype(page,
> +							MIGRATE_MOVABLE);
> +				else
> +					dec_highatomic_pageblock(zone, page,
> +							MIGRATE_MOVABLE);
> +			}
>  		}
>  	}
>
>

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 1/4] mm: adjust reserved highatomic count
@ 2016-10-07 12:30     ` Vlastimil Babka
  0 siblings, 0 replies; 56+ messages in thread
From: Vlastimil Babka @ 2016-10-07 12:30 UTC (permalink / raw)
  To: Minchan Kim, Andrew Morton
  Cc: Mel Gorman, Joonsoo Kim, linux-kernel, linux-mm, Sangseok Lee

On 10/07/2016 07:45 AM, Minchan Kim wrote:
> In page freeing path, migratetype is racy so that a highorderatomic
> page could free into non-highorderatomic free list.

Yes. If page from a pageblock went to a pcplist before that pageblock 
was reserved as highatomic, free_pcppages_bulk() will misplace it.

> If that page
> is allocated, VM can change the pageblock from higorderatomic to
> something.

More specifically, steal_suitable_fallback(). Yes.

> In that case, we should adjust nr_reserved_highatomic.
> Otherwise, VM cannot reserve highorderatomic pageblocks any more
> although it doesn't reach 1% limit. It means highorder atomic
> allocation failure would be higher.
>
> So, this patch decreases the account as well as migratetype
> if it was MIGRATE_HIGHATOMIC.
>
> Signed-off-by: Minchan Kim <minchan@kernel.org>

Hm wouldn't it be simpler just to prevent the pageblock's migratetype to 
be changed if it's highatomic? Possibly also not do 
move_freepages_block() in that case. Most accurate would be to put such 
misplaced page on the proper freelist and retry the fallback, but that 
might be overkill.

> ---
>  mm/page_alloc.c | 44 ++++++++++++++++++++++++++++++++++++++------
>  1 file changed, 38 insertions(+), 6 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 55ad0229ebf3..e7cbb3cc22fa 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -282,6 +282,9 @@ EXPORT_SYMBOL(nr_node_ids);
>  EXPORT_SYMBOL(nr_online_nodes);
>  #endif
>
> +static void dec_highatomic_pageblock(struct zone *zone, struct page *page,
> +					int migratetype);
> +
>  int page_group_by_mobility_disabled __read_mostly;
>
>  #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
> @@ -1935,7 +1938,14 @@ static void change_pageblock_range(struct page *pageblock_page,
>  	int nr_pageblocks = 1 << (start_order - pageblock_order);
>
>  	while (nr_pageblocks--) {
> -		set_pageblock_migratetype(pageblock_page, migratetype);
> +		if (get_pageblock_migratetype(pageblock_page) !=
> +			MIGRATE_HIGHATOMIC)
> +			set_pageblock_migratetype(pageblock_page,
> +							migratetype);
> +		else
> +			dec_highatomic_pageblock(page_zone(pageblock_page),
> +							pageblock_page,
> +							migratetype);
>  		pageblock_page += pageblock_nr_pages;
>  	}
>  }
> @@ -1996,8 +2006,14 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
>
>  	/* Claim the whole block if over half of it is free */
>  	if (pages >= (1 << (pageblock_order-1)) ||
> -			page_group_by_mobility_disabled)
> -		set_pageblock_migratetype(page, start_type);
> +			page_group_by_mobility_disabled) {
> +		int mt = get_pageblock_migratetype(page);
> +
> +		if (mt != MIGRATE_HIGHATOMIC)
> +			set_pageblock_migratetype(page, start_type);
> +		else
> +			dec_highatomic_pageblock(zone, page, start_type);
> +	}
>  }
>
>  /*
> @@ -2037,6 +2053,17 @@ int find_suitable_fallback(struct free_area *area, unsigned int order,
>  	return -1;
>  }
>
> +static void dec_highatomic_pageblock(struct zone *zone, struct page *page,
> +					int migratetype)
> +{
> +	if (zone->nr_reserved_highatomic <= pageblock_nr_pages)
> +		return;
> +
> +	zone->nr_reserved_highatomic -= min(pageblock_nr_pages,
> +					zone->nr_reserved_highatomic);
> +	set_pageblock_migratetype(page, migratetype);
> +}
> +
>  /*
>   * Reserve a pageblock for exclusive use of high-order atomic allocations if
>   * there are no empty page blocks that contain a page with a suitable order
> @@ -2555,9 +2582,14 @@ int __isolate_free_page(struct page *page, unsigned int order)
>  		struct page *endpage = page + (1 << order) - 1;
>  		for (; page < endpage; page += pageblock_nr_pages) {
>  			int mt = get_pageblock_migratetype(page);
> -			if (!is_migrate_isolate(mt) && !is_migrate_cma(mt))
> -				set_pageblock_migratetype(page,
> -							  MIGRATE_MOVABLE);
> +			if (!is_migrate_isolate(mt) && !is_migrate_cma(mt)) {
> +				if (mt != MIGRATE_HIGHATOMIC)
> +					set_pageblock_migratetype(page,
> +							MIGRATE_MOVABLE);
> +				else
> +					dec_highatomic_pageblock(zone, page,
> +							MIGRATE_MOVABLE);
> +			}
>  		}
>  	}
>
>

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 2/4] mm: prevent double decrease of nr_reserved_highatomic
  2016-10-07  5:45   ` Minchan Kim
@ 2016-10-07 12:44     ` Vlastimil Babka
  -1 siblings, 0 replies; 56+ messages in thread
From: Vlastimil Babka @ 2016-10-07 12:44 UTC (permalink / raw)
  To: Minchan Kim, Andrew Morton
  Cc: Mel Gorman, Joonsoo Kim, linux-kernel, linux-mm, Sangseok Lee

On 10/07/2016 07:45 AM, Minchan Kim wrote:
> There is race between page freeing and unreserved highatomic.
>
>  CPU 0				    CPU 1
>
>     free_hot_cold_page
>       mt = get_pfnblock_migratetype

so here mt == MIGRATE_HIGHATOMIC?

>       set_pcppage_migratetype(page, mt)
>     				    unreserve_highatomic_pageblock
>     				    spin_lock_irqsave(&zone->lock)
>     				    move_freepages_block
>     				    set_pageblock_migratetype(page)
>     				    spin_unlock_irqrestore(&zone->lock)
>       free_pcppages_bulk
>         __free_one_page(mt) <- mt is stale
>
> By above race, a page on CPU 0 could go non-highorderatomic free list
> since the pageblock's type is changed.
> By that, unreserve logic of
> highorderatomic can decrease reserved count on a same pageblock
> several times and then it will make mismatch between
> nr_reserved_highatomic and the number of reserved pageblock.

Hmm I see.

> So, this patch verifies whether the pageblock is highatomic or not
> and decrease the count only if the pageblock is highatomic.

Yeah I guess that's the easiest solution.

> Signed-off-by: Minchan Kim <minchan@kernel.org>

Acked-by: Vlastimil Babka <vbabka@suse.cz>

> ---
>  mm/page_alloc.c | 24 ++++++++++++++++++------
>  1 file changed, 18 insertions(+), 6 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index e7cbb3cc22fa..d110cd640264 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2133,13 +2133,25 @@ static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
>  				continue;
>
>  			/*
> -			 * It should never happen but changes to locking could
> -			 * inadvertently allow a per-cpu drain to add pages
> -			 * to MIGRATE_HIGHATOMIC while unreserving so be safe
> -			 * and watch for underflows.
> +			 * In page freeing path, migratetype change is racy so
> +			 * we can counter several free pages in a pageblock
> +			 * in this loop althoug we changed the pageblock type
> +			 * from highatomic to ac->migratetype. So we should
> +			 * adjust the count once.
>  			 */
> -			zone->nr_reserved_highatomic -= min(pageblock_nr_pages,
> -				zone->nr_reserved_highatomic);
> +			if (get_pageblock_migratetype(page) ==
> +							MIGRATE_HIGHATOMIC) {
> +				/*
> +				 * It should never happen but changes to
> +				 * locking could inadvertently allow a per-cpu
> +				 * drain to add pages to MIGRATE_HIGHATOMIC
> +				 * while unreserving so be safe and watch for
> +				 * underflows.
> +				 */
> +				zone->nr_reserved_highatomic -= min(
> +						pageblock_nr_pages,
> +						zone->nr_reserved_highatomic);
> +			}
>
>  			/*
>  			 * Convert to ac->migratetype and avoid the normal
>

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 2/4] mm: prevent double decrease of nr_reserved_highatomic
@ 2016-10-07 12:44     ` Vlastimil Babka
  0 siblings, 0 replies; 56+ messages in thread
From: Vlastimil Babka @ 2016-10-07 12:44 UTC (permalink / raw)
  To: Minchan Kim, Andrew Morton
  Cc: Mel Gorman, Joonsoo Kim, linux-kernel, linux-mm, Sangseok Lee

On 10/07/2016 07:45 AM, Minchan Kim wrote:
> There is race between page freeing and unreserved highatomic.
>
>  CPU 0				    CPU 1
>
>     free_hot_cold_page
>       mt = get_pfnblock_migratetype

so here mt == MIGRATE_HIGHATOMIC?

>       set_pcppage_migratetype(page, mt)
>     				    unreserve_highatomic_pageblock
>     				    spin_lock_irqsave(&zone->lock)
>     				    move_freepages_block
>     				    set_pageblock_migratetype(page)
>     				    spin_unlock_irqrestore(&zone->lock)
>       free_pcppages_bulk
>         __free_one_page(mt) <- mt is stale
>
> By above race, a page on CPU 0 could go non-highorderatomic free list
> since the pageblock's type is changed.
> By that, unreserve logic of
> highorderatomic can decrease reserved count on a same pageblock
> several times and then it will make mismatch between
> nr_reserved_highatomic and the number of reserved pageblock.

Hmm I see.

> So, this patch verifies whether the pageblock is highatomic or not
> and decrease the count only if the pageblock is highatomic.

Yeah I guess that's the easiest solution.

> Signed-off-by: Minchan Kim <minchan@kernel.org>

Acked-by: Vlastimil Babka <vbabka@suse.cz>

> ---
>  mm/page_alloc.c | 24 ++++++++++++++++++------
>  1 file changed, 18 insertions(+), 6 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index e7cbb3cc22fa..d110cd640264 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2133,13 +2133,25 @@ static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
>  				continue;
>
>  			/*
> -			 * It should never happen but changes to locking could
> -			 * inadvertently allow a per-cpu drain to add pages
> -			 * to MIGRATE_HIGHATOMIC while unreserving so be safe
> -			 * and watch for underflows.
> +			 * In page freeing path, migratetype change is racy so
> +			 * we can counter several free pages in a pageblock
> +			 * in this loop althoug we changed the pageblock type
> +			 * from highatomic to ac->migratetype. So we should
> +			 * adjust the count once.
>  			 */
> -			zone->nr_reserved_highatomic -= min(pageblock_nr_pages,
> -				zone->nr_reserved_highatomic);
> +			if (get_pageblock_migratetype(page) ==
> +							MIGRATE_HIGHATOMIC) {
> +				/*
> +				 * It should never happen but changes to
> +				 * locking could inadvertently allow a per-cpu
> +				 * drain to add pages to MIGRATE_HIGHATOMIC
> +				 * while unreserving so be safe and watch for
> +				 * underflows.
> +				 */
> +				zone->nr_reserved_highatomic -= min(
> +						pageblock_nr_pages,
> +						zone->nr_reserved_highatomic);
> +			}
>
>  			/*
>  			 * Convert to ac->migratetype and avoid the normal
>

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 1/4] mm: adjust reserved highatomic count
  2016-10-07 12:30     ` Vlastimil Babka
@ 2016-10-07 14:29       ` Minchan Kim
  -1 siblings, 0 replies; 56+ messages in thread
From: Minchan Kim @ 2016-10-07 14:29 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Andrew Morton, Mel Gorman, Joonsoo Kim, linux-kernel, linux-mm,
	Sangseok Lee

Hi Vlastimil,

On Fri, Oct 07, 2016 at 02:30:04PM +0200, Vlastimil Babka wrote:
> On 10/07/2016 07:45 AM, Minchan Kim wrote:
> >In page freeing path, migratetype is racy so that a highorderatomic
> >page could free into non-highorderatomic free list.
> 
> Yes. If page from a pageblock went to a pcplist before that pageblock was
> reserved as highatomic, free_pcppages_bulk() will misplace it.

As well, high-order freeing path has a problem, too.


    CPU 1                               CPU 2
    
                                        __free_pages_ok
                                        /* got highatomic mt */
    unreserve_highatomic_pageblock      mt = get_pfnblock_migratetype
    spin_lock_irqsave(&zone->lock);
    move_freepages_block
    /* change from highatomic to something
    set_pageblock_migratetype(page)
    spin_unlock_irqrestore(&zone->lock)
    
                                        spin_lock(&zone->lock);
                                        /* highatomic mt is stale */
                                        __free_one_page(page, mt);
 
Acutually, I tried to solve this problem with fixing the free path
but it needs to add a branch to verify highorderatomic mt in
both order-0 and high-order page freeing path. On highorder page freeing
path wouldn't be a problem but I don't want to add the branch in pcp
freeing path which is hot.

> 
> >If that page
> >is allocated, VM can change the pageblock from higorderatomic to
> >something.
> 
> More specifically, steal_suitable_fallback(). Yes.

As well, __isolate_free_page, too.

> 
> >In that case, we should adjust nr_reserved_highatomic.
> >Otherwise, VM cannot reserve highorderatomic pageblocks any more
> >although it doesn't reach 1% limit. It means highorder atomic
> >allocation failure would be higher.
> >
> >So, this patch decreases the account as well as migratetype
> >if it was MIGRATE_HIGHATOMIC.
> >
> >Signed-off-by: Minchan Kim <minchan@kernel.org>
> 
> Hm wouldn't it be simpler just to prevent the pageblock's migratetype to be
> changed if it's highatomic? Possibly also not do move_freepages_block() in

It could be. Actually, I did it with modifying can_steal_fallback which returns
false it found the pageblock is highorderatomic but changed to this way again
because I don't have any justification to prevent changing pageblock.
If you give concrete justification so others isn't against on it, I am happy to
do what you suggested.

> that case. Most accurate would be to put such misplaced page on the proper
> freelist and retry the fallback, but that might be overkill.
> 
> >---
> > mm/page_alloc.c | 44 ++++++++++++++++++++++++++++++++++++++------
> > 1 file changed, 38 insertions(+), 6 deletions(-)
> >
> >diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> >index 55ad0229ebf3..e7cbb3cc22fa 100644
> >--- a/mm/page_alloc.c
> >+++ b/mm/page_alloc.c
> >@@ -282,6 +282,9 @@ EXPORT_SYMBOL(nr_node_ids);
> > EXPORT_SYMBOL(nr_online_nodes);
> > #endif
> >
> >+static void dec_highatomic_pageblock(struct zone *zone, struct page *page,
> >+					int migratetype);
> >+
> > int page_group_by_mobility_disabled __read_mostly;
> >
> > #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
> >@@ -1935,7 +1938,14 @@ static void change_pageblock_range(struct page *pageblock_page,
> > 	int nr_pageblocks = 1 << (start_order - pageblock_order);
> >
> > 	while (nr_pageblocks--) {
> >-		set_pageblock_migratetype(pageblock_page, migratetype);
> >+		if (get_pageblock_migratetype(pageblock_page) !=
> >+			MIGRATE_HIGHATOMIC)
> >+			set_pageblock_migratetype(pageblock_page,
> >+							migratetype);
> >+		else
> >+			dec_highatomic_pageblock(page_zone(pageblock_page),
> >+							pageblock_page,
> >+							migratetype);
> > 		pageblock_page += pageblock_nr_pages;
> > 	}
> > }
> >@@ -1996,8 +2006,14 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
> >
> > 	/* Claim the whole block if over half of it is free */
> > 	if (pages >= (1 << (pageblock_order-1)) ||
> >-			page_group_by_mobility_disabled)
> >-		set_pageblock_migratetype(page, start_type);
> >+			page_group_by_mobility_disabled) {
> >+		int mt = get_pageblock_migratetype(page);
> >+
> >+		if (mt != MIGRATE_HIGHATOMIC)
> >+			set_pageblock_migratetype(page, start_type);
> >+		else
> >+			dec_highatomic_pageblock(zone, page, start_type);
> >+	}
> > }
> >
> > /*
> >@@ -2037,6 +2053,17 @@ int find_suitable_fallback(struct free_area *area, unsigned int order,
> > 	return -1;
> > }
> >
> >+static void dec_highatomic_pageblock(struct zone *zone, struct page *page,
> >+					int migratetype)
> >+{
> >+	if (zone->nr_reserved_highatomic <= pageblock_nr_pages)
> >+		return;
> >+
> >+	zone->nr_reserved_highatomic -= min(pageblock_nr_pages,
> >+					zone->nr_reserved_highatomic);
> >+	set_pageblock_migratetype(page, migratetype);
> >+}
> >+
> > /*
> >  * Reserve a pageblock for exclusive use of high-order atomic allocations if
> >  * there are no empty page blocks that contain a page with a suitable order
> >@@ -2555,9 +2582,14 @@ int __isolate_free_page(struct page *page, unsigned int order)
> > 		struct page *endpage = page + (1 << order) - 1;
> > 		for (; page < endpage; page += pageblock_nr_pages) {
> > 			int mt = get_pageblock_migratetype(page);
> >-			if (!is_migrate_isolate(mt) && !is_migrate_cma(mt))
> >-				set_pageblock_migratetype(page,
> >-							  MIGRATE_MOVABLE);
> >+			if (!is_migrate_isolate(mt) && !is_migrate_cma(mt)) {
> >+				if (mt != MIGRATE_HIGHATOMIC)
> >+					set_pageblock_migratetype(page,
> >+							MIGRATE_MOVABLE);
> >+				else
> >+					dec_highatomic_pageblock(zone, page,
> >+							MIGRATE_MOVABLE);
> >+			}
> > 		}
> > 	}
> >
> >
> 

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 1/4] mm: adjust reserved highatomic count
@ 2016-10-07 14:29       ` Minchan Kim
  0 siblings, 0 replies; 56+ messages in thread
From: Minchan Kim @ 2016-10-07 14:29 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Andrew Morton, Mel Gorman, Joonsoo Kim, linux-kernel, linux-mm,
	Sangseok Lee

Hi Vlastimil,

On Fri, Oct 07, 2016 at 02:30:04PM +0200, Vlastimil Babka wrote:
> On 10/07/2016 07:45 AM, Minchan Kim wrote:
> >In page freeing path, migratetype is racy so that a highorderatomic
> >page could free into non-highorderatomic free list.
> 
> Yes. If page from a pageblock went to a pcplist before that pageblock was
> reserved as highatomic, free_pcppages_bulk() will misplace it.

As well, high-order freeing path has a problem, too.


    CPU 1                               CPU 2
    
                                        __free_pages_ok
                                        /* got highatomic mt */
    unreserve_highatomic_pageblock      mt = get_pfnblock_migratetype
    spin_lock_irqsave(&zone->lock);
    move_freepages_block
    /* change from highatomic to something
    set_pageblock_migratetype(page)
    spin_unlock_irqrestore(&zone->lock)
    
                                        spin_lock(&zone->lock);
                                        /* highatomic mt is stale */
                                        __free_one_page(page, mt);
 
Acutually, I tried to solve this problem with fixing the free path
but it needs to add a branch to verify highorderatomic mt in
both order-0 and high-order page freeing path. On highorder page freeing
path wouldn't be a problem but I don't want to add the branch in pcp
freeing path which is hot.

> 
> >If that page
> >is allocated, VM can change the pageblock from higorderatomic to
> >something.
> 
> More specifically, steal_suitable_fallback(). Yes.

As well, __isolate_free_page, too.

> 
> >In that case, we should adjust nr_reserved_highatomic.
> >Otherwise, VM cannot reserve highorderatomic pageblocks any more
> >although it doesn't reach 1% limit. It means highorder atomic
> >allocation failure would be higher.
> >
> >So, this patch decreases the account as well as migratetype
> >if it was MIGRATE_HIGHATOMIC.
> >
> >Signed-off-by: Minchan Kim <minchan@kernel.org>
> 
> Hm wouldn't it be simpler just to prevent the pageblock's migratetype to be
> changed if it's highatomic? Possibly also not do move_freepages_block() in

It could be. Actually, I did it with modifying can_steal_fallback which returns
false it found the pageblock is highorderatomic but changed to this way again
because I don't have any justification to prevent changing pageblock.
If you give concrete justification so others isn't against on it, I am happy to
do what you suggested.

> that case. Most accurate would be to put such misplaced page on the proper
> freelist and retry the fallback, but that might be overkill.
> 
> >---
> > mm/page_alloc.c | 44 ++++++++++++++++++++++++++++++++++++++------
> > 1 file changed, 38 insertions(+), 6 deletions(-)
> >
> >diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> >index 55ad0229ebf3..e7cbb3cc22fa 100644
> >--- a/mm/page_alloc.c
> >+++ b/mm/page_alloc.c
> >@@ -282,6 +282,9 @@ EXPORT_SYMBOL(nr_node_ids);
> > EXPORT_SYMBOL(nr_online_nodes);
> > #endif
> >
> >+static void dec_highatomic_pageblock(struct zone *zone, struct page *page,
> >+					int migratetype);
> >+
> > int page_group_by_mobility_disabled __read_mostly;
> >
> > #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
> >@@ -1935,7 +1938,14 @@ static void change_pageblock_range(struct page *pageblock_page,
> > 	int nr_pageblocks = 1 << (start_order - pageblock_order);
> >
> > 	while (nr_pageblocks--) {
> >-		set_pageblock_migratetype(pageblock_page, migratetype);
> >+		if (get_pageblock_migratetype(pageblock_page) !=
> >+			MIGRATE_HIGHATOMIC)
> >+			set_pageblock_migratetype(pageblock_page,
> >+							migratetype);
> >+		else
> >+			dec_highatomic_pageblock(page_zone(pageblock_page),
> >+							pageblock_page,
> >+							migratetype);
> > 		pageblock_page += pageblock_nr_pages;
> > 	}
> > }
> >@@ -1996,8 +2006,14 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
> >
> > 	/* Claim the whole block if over half of it is free */
> > 	if (pages >= (1 << (pageblock_order-1)) ||
> >-			page_group_by_mobility_disabled)
> >-		set_pageblock_migratetype(page, start_type);
> >+			page_group_by_mobility_disabled) {
> >+		int mt = get_pageblock_migratetype(page);
> >+
> >+		if (mt != MIGRATE_HIGHATOMIC)
> >+			set_pageblock_migratetype(page, start_type);
> >+		else
> >+			dec_highatomic_pageblock(zone, page, start_type);
> >+	}
> > }
> >
> > /*
> >@@ -2037,6 +2053,17 @@ int find_suitable_fallback(struct free_area *area, unsigned int order,
> > 	return -1;
> > }
> >
> >+static void dec_highatomic_pageblock(struct zone *zone, struct page *page,
> >+					int migratetype)
> >+{
> >+	if (zone->nr_reserved_highatomic <= pageblock_nr_pages)
> >+		return;
> >+
> >+	zone->nr_reserved_highatomic -= min(pageblock_nr_pages,
> >+					zone->nr_reserved_highatomic);
> >+	set_pageblock_migratetype(page, migratetype);
> >+}
> >+
> > /*
> >  * Reserve a pageblock for exclusive use of high-order atomic allocations if
> >  * there are no empty page blocks that contain a page with a suitable order
> >@@ -2555,9 +2582,14 @@ int __isolate_free_page(struct page *page, unsigned int order)
> > 		struct page *endpage = page + (1 << order) - 1;
> > 		for (; page < endpage; page += pageblock_nr_pages) {
> > 			int mt = get_pageblock_migratetype(page);
> >-			if (!is_migrate_isolate(mt) && !is_migrate_cma(mt))
> >-				set_pageblock_migratetype(page,
> >-							  MIGRATE_MOVABLE);
> >+			if (!is_migrate_isolate(mt) && !is_migrate_cma(mt)) {
> >+				if (mt != MIGRATE_HIGHATOMIC)
> >+					set_pageblock_migratetype(page,
> >+							MIGRATE_MOVABLE);
> >+				else
> >+					dec_highatomic_pageblock(zone, page,
> >+							MIGRATE_MOVABLE);
> >+			}
> > 		}
> > 	}
> >
> >
> 

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 2/4] mm: prevent double decrease of nr_reserved_highatomic
  2016-10-07 12:44     ` Vlastimil Babka
@ 2016-10-07 14:30       ` Minchan Kim
  -1 siblings, 0 replies; 56+ messages in thread
From: Minchan Kim @ 2016-10-07 14:30 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Andrew Morton, Mel Gorman, Joonsoo Kim, linux-kernel, linux-mm,
	Sangseok Lee

On Fri, Oct 07, 2016 at 02:44:15PM +0200, Vlastimil Babka wrote:
> On 10/07/2016 07:45 AM, Minchan Kim wrote:
> >There is race between page freeing and unreserved highatomic.
> >
> > CPU 0				    CPU 1
> >
> >    free_hot_cold_page
> >      mt = get_pfnblock_migratetype
> 
> so here mt == MIGRATE_HIGHATOMIC?

Yes.

> 
> >      set_pcppage_migratetype(page, mt)
> >    				    unreserve_highatomic_pageblock
> >    				    spin_lock_irqsave(&zone->lock)
> >    				    move_freepages_block
> >    				    set_pageblock_migratetype(page)
> >    				    spin_unlock_irqrestore(&zone->lock)
> >      free_pcppages_bulk
> >        __free_one_page(mt) <- mt is stale
> >
> >By above race, a page on CPU 0 could go non-highorderatomic free list
> >since the pageblock's type is changed.
> >By that, unreserve logic of
> >highorderatomic can decrease reserved count on a same pageblock
> >several times and then it will make mismatch between
> >nr_reserved_highatomic and the number of reserved pageblock.
> 
> Hmm I see.
> 
> >So, this patch verifies whether the pageblock is highatomic or not
> >and decrease the count only if the pageblock is highatomic.
> 
> Yeah I guess that's the easiest solution.
> 
> >Signed-off-by: Minchan Kim <minchan@kernel.org>
> 
> Acked-by: Vlastimil Babka <vbabka@suse.cz>

Thanks, Vlastimil.

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 2/4] mm: prevent double decrease of nr_reserved_highatomic
@ 2016-10-07 14:30       ` Minchan Kim
  0 siblings, 0 replies; 56+ messages in thread
From: Minchan Kim @ 2016-10-07 14:30 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Andrew Morton, Mel Gorman, Joonsoo Kim, linux-kernel, linux-mm,
	Sangseok Lee

On Fri, Oct 07, 2016 at 02:44:15PM +0200, Vlastimil Babka wrote:
> On 10/07/2016 07:45 AM, Minchan Kim wrote:
> >There is race between page freeing and unreserved highatomic.
> >
> > CPU 0				    CPU 1
> >
> >    free_hot_cold_page
> >      mt = get_pfnblock_migratetype
> 
> so here mt == MIGRATE_HIGHATOMIC?

Yes.

> 
> >      set_pcppage_migratetype(page, mt)
> >    				    unreserve_highatomic_pageblock
> >    				    spin_lock_irqsave(&zone->lock)
> >    				    move_freepages_block
> >    				    set_pageblock_migratetype(page)
> >    				    spin_unlock_irqrestore(&zone->lock)
> >      free_pcppages_bulk
> >        __free_one_page(mt) <- mt is stale
> >
> >By above race, a page on CPU 0 could go non-highorderatomic free list
> >since the pageblock's type is changed.
> >By that, unreserve logic of
> >highorderatomic can decrease reserved count on a same pageblock
> >several times and then it will make mismatch between
> >nr_reserved_highatomic and the number of reserved pageblock.
> 
> Hmm I see.
> 
> >So, this patch verifies whether the pageblock is highatomic or not
> >and decrease the count only if the pageblock is highatomic.
> 
> Yeah I guess that's the easiest solution.
> 
> >Signed-off-by: Minchan Kim <minchan@kernel.org>
> 
> Acked-by: Vlastimil Babka <vbabka@suse.cz>

Thanks, Vlastimil.

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 3/4] mm: unreserve highatomic free pages fully before OOM
  2016-10-07  9:09     ` Michal Hocko
@ 2016-10-07 14:43       ` Minchan Kim
  -1 siblings, 0 replies; 56+ messages in thread
From: Minchan Kim @ 2016-10-07 14:43 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Andrew Morton, Mel Gorman, Vlastimil Babka, Joonsoo Kim,
	linux-kernel, linux-mm, Sangseok Lee

On Fri, Oct 07, 2016 at 11:09:17AM +0200, Michal Hocko wrote:
> On Fri 07-10-16 14:45:35, Minchan Kim wrote:
> > After fixing the race of highatomic page count, I still encounter
> > OOM with many free memory reserved as highatomic.
> > 
> > One of reason in my testing was we unreserve free pages only if
> > reclaim has progress. Otherwise, we cannot have chance to unreseve.
> > 
> > Other problem after fixing it was it doesn't guarantee every pages
> > unreserving of highatomic pageblock because it just release *a*
> > pageblock which could have few free pages so other context could
> > steal it easily so that the process stucked with direct reclaim
> > finally can encounter OOM although there are free pages which can
> > be unreserved.
> > 
> > This patch changes the logic so that it unreserves pageblocks with
> > no_progress_loop proportionally. IOW, in first retrial of reclaim,
> > it will try to unreserve a pageblock. In second retrial of reclaim,
> > it will try to unreserve 1/MAX_RECLAIM_RETRIES * reserved_pageblock
> > and finally all reserved pageblock before the OOM.
> > 
> > Signed-off-by: Minchan Kim <minchan@kernel.org>
> > ---
> >  mm/page_alloc.c | 57 ++++++++++++++++++++++++++++++++++++++++++++-------------
> >  1 file changed, 44 insertions(+), 13 deletions(-)
> 
> This sounds much more complex then it needs to be IMHO. Why something as
> simple as thhe following wouldn't work? Please note that I even didn't
> try to compile this. It is just give you an idea.
> ---
>  mm/page_alloc.c | 26 ++++++++++++++++++++------
>  1 file changed, 20 insertions(+), 6 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 73f60ad6315f..e575a4f38555 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2056,7 +2056,8 @@ static void reserve_highatomic_pageblock(struct page *page, struct zone *zone,
>   * intense memory pressure but failed atomic allocations should be easier
>   * to recover from than an OOM.
>   */
> -static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
> +static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
> +		bool force)
>  {
>  	struct zonelist *zonelist = ac->zonelist;
>  	unsigned long flags;
> @@ -2067,8 +2068,14 @@ static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
>  
>  	for_each_zone_zonelist_nodemask(zone, z, zonelist, ac->high_zoneidx,
>  								ac->nodemask) {
> -		/* Preserve at least one pageblock */
> -		if (zone->nr_reserved_highatomic <= pageblock_nr_pages)
> +		if (!zone->nr_reserved_highatomic)
> +			continue;
> +
> +		/*
> +		 * Preserve at least one pageblock unless we are really running
> +		 * out of memory
> +		 */
> +		if (!force && zone->nr_reserved_highatomic <= pageblock_nr_pages)
>  			continue;
>  
>  		spin_lock_irqsave(&zone->lock, flags);
> @@ -2102,10 +2109,12 @@ static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
>  			set_pageblock_migratetype(page, ac->migratetype);
>  			move_freepages_block(zone, page, ac->migratetype);
>  			spin_unlock_irqrestore(&zone->lock, flags);
> -			return;
> +			return true;

Such cut-off makes reserved pageblock remained before the OOM.
We call it as premature OOM kill.

If you feel it's rather complex, simply, we can drain *every*
reserved pages when no_progress_loops is greater than MAX_RECLAIM_RETRIES.
Do you want it?

It's rather conservative approach to keep highatomic pageblocks.
Anyway, I think it's matter of policy and my goal is just to use up
every reserved pageblock before OOM so anyone never say
"Hey, enough free pages which is greater than min watermark but
why I should see the OOM with GFP_KERNEL 4K allocation".

so I'm not against on it if you guys like it. Say your preference.


>  		}
>  		spin_unlock_irqrestore(&zone->lock, flags);
>  	}
> +
> +	return false;
>  }
>  
>  /* Remove an element from the buddy allocator from the fallback list */
> @@ -3302,7 +3311,7 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
>  	 * Shrink them them and try again
>  	 */
>  	if (!page && !drained) {
> -		unreserve_highatomic_pageblock(ac);
> +		unreserve_highatomic_pageblock(ac, false);
>  		drain_all_pages(NULL);
>  		drained = true;
>  		goto retry;
> @@ -3418,9 +3427,14 @@ should_reclaim_retry(gfp_t gfp_mask, unsigned order,
>  	/*
>  	 * Make sure we converge to OOM if we cannot make any progress
>  	 * several times in the row.
> +	 * Do last desparate attempt to throw high atomic reserves away
> +	 * before we give up
>  	 */
> -	if (*no_progress_loops > MAX_RECLAIM_RETRIES)
> +	if (*no_progress_loops > MAX_RECLAIM_RETRIES) {
> +		if (unreserve_highatomic_pageblock(ac, true))
> +			return true;
>  		return false;
> +	}
>  
>  	/*
>  	 * Keep reclaiming pages while there is a chance this will lead
> -- 
> Michal Hocko
> SUSE Labs

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 3/4] mm: unreserve highatomic free pages fully before OOM
@ 2016-10-07 14:43       ` Minchan Kim
  0 siblings, 0 replies; 56+ messages in thread
From: Minchan Kim @ 2016-10-07 14:43 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Andrew Morton, Mel Gorman, Vlastimil Babka, Joonsoo Kim,
	linux-kernel, linux-mm, Sangseok Lee

On Fri, Oct 07, 2016 at 11:09:17AM +0200, Michal Hocko wrote:
> On Fri 07-10-16 14:45:35, Minchan Kim wrote:
> > After fixing the race of highatomic page count, I still encounter
> > OOM with many free memory reserved as highatomic.
> > 
> > One of reason in my testing was we unreserve free pages only if
> > reclaim has progress. Otherwise, we cannot have chance to unreseve.
> > 
> > Other problem after fixing it was it doesn't guarantee every pages
> > unreserving of highatomic pageblock because it just release *a*
> > pageblock which could have few free pages so other context could
> > steal it easily so that the process stucked with direct reclaim
> > finally can encounter OOM although there are free pages which can
> > be unreserved.
> > 
> > This patch changes the logic so that it unreserves pageblocks with
> > no_progress_loop proportionally. IOW, in first retrial of reclaim,
> > it will try to unreserve a pageblock. In second retrial of reclaim,
> > it will try to unreserve 1/MAX_RECLAIM_RETRIES * reserved_pageblock
> > and finally all reserved pageblock before the OOM.
> > 
> > Signed-off-by: Minchan Kim <minchan@kernel.org>
> > ---
> >  mm/page_alloc.c | 57 ++++++++++++++++++++++++++++++++++++++++++++-------------
> >  1 file changed, 44 insertions(+), 13 deletions(-)
> 
> This sounds much more complex then it needs to be IMHO. Why something as
> simple as thhe following wouldn't work? Please note that I even didn't
> try to compile this. It is just give you an idea.
> ---
>  mm/page_alloc.c | 26 ++++++++++++++++++++------
>  1 file changed, 20 insertions(+), 6 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 73f60ad6315f..e575a4f38555 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2056,7 +2056,8 @@ static void reserve_highatomic_pageblock(struct page *page, struct zone *zone,
>   * intense memory pressure but failed atomic allocations should be easier
>   * to recover from than an OOM.
>   */
> -static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
> +static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
> +		bool force)
>  {
>  	struct zonelist *zonelist = ac->zonelist;
>  	unsigned long flags;
> @@ -2067,8 +2068,14 @@ static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
>  
>  	for_each_zone_zonelist_nodemask(zone, z, zonelist, ac->high_zoneidx,
>  								ac->nodemask) {
> -		/* Preserve at least one pageblock */
> -		if (zone->nr_reserved_highatomic <= pageblock_nr_pages)
> +		if (!zone->nr_reserved_highatomic)
> +			continue;
> +
> +		/*
> +		 * Preserve at least one pageblock unless we are really running
> +		 * out of memory
> +		 */
> +		if (!force && zone->nr_reserved_highatomic <= pageblock_nr_pages)
>  			continue;
>  
>  		spin_lock_irqsave(&zone->lock, flags);
> @@ -2102,10 +2109,12 @@ static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
>  			set_pageblock_migratetype(page, ac->migratetype);
>  			move_freepages_block(zone, page, ac->migratetype);
>  			spin_unlock_irqrestore(&zone->lock, flags);
> -			return;
> +			return true;

Such cut-off makes reserved pageblock remained before the OOM.
We call it as premature OOM kill.

If you feel it's rather complex, simply, we can drain *every*
reserved pages when no_progress_loops is greater than MAX_RECLAIM_RETRIES.
Do you want it?

It's rather conservative approach to keep highatomic pageblocks.
Anyway, I think it's matter of policy and my goal is just to use up
every reserved pageblock before OOM so anyone never say
"Hey, enough free pages which is greater than min watermark but
why I should see the OOM with GFP_KERNEL 4K allocation".

so I'm not against on it if you guys like it. Say your preference.


>  		}
>  		spin_unlock_irqrestore(&zone->lock, flags);
>  	}
> +
> +	return false;
>  }
>  
>  /* Remove an element from the buddy allocator from the fallback list */
> @@ -3302,7 +3311,7 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
>  	 * Shrink them them and try again
>  	 */
>  	if (!page && !drained) {
> -		unreserve_highatomic_pageblock(ac);
> +		unreserve_highatomic_pageblock(ac, false);
>  		drain_all_pages(NULL);
>  		drained = true;
>  		goto retry;
> @@ -3418,9 +3427,14 @@ should_reclaim_retry(gfp_t gfp_mask, unsigned order,
>  	/*
>  	 * Make sure we converge to OOM if we cannot make any progress
>  	 * several times in the row.
> +	 * Do last desparate attempt to throw high atomic reserves away
> +	 * before we give up
>  	 */
> -	if (*no_progress_loops > MAX_RECLAIM_RETRIES)
> +	if (*no_progress_loops > MAX_RECLAIM_RETRIES) {
> +		if (unreserve_highatomic_pageblock(ac, true))
> +			return true;
>  		return false;
> +	}
>  
>  	/*
>  	 * Keep reclaiming pages while there is a chance this will lead
> -- 
> Michal Hocko
> SUSE Labs

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 0/4] use up highorder free pages before OOM
  2016-10-07  9:16   ` Michal Hocko
@ 2016-10-07 15:04     ` Minchan Kim
  -1 siblings, 0 replies; 56+ messages in thread
From: Minchan Kim @ 2016-10-07 15:04 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Andrew Morton, Mel Gorman, Vlastimil Babka, Joonsoo Kim,
	linux-kernel, linux-mm, Sangseok Lee

On Fri, Oct 07, 2016 at 11:16:26AM +0200, Michal Hocko wrote:
> On Fri 07-10-16 14:45:32, Minchan Kim wrote:
> > I got OOM report from production team with v4.4 kernel.
> > It has enough free memory but failed to allocate order-0 page and
> > finally encounter OOM kill.
> > I could reproduce it with my test easily. Look at below.
> > The reason is free pages(19M) of DMA32 zone are reserved for
> > HIGHORDERATOMIC and doesn't unreserved before the OOM.
> 
> Is this really reproducible?

I can reproduce in 1 hour.

> 
> [...]
> > active_anon:383949 inactive_anon:106724 isolated_anon:0
> >  active_file:15 inactive_file:44 isolated_file:0
> >  unevictable:0 dirty:0 writeback:24 unstable:0
> >  slab_reclaimable:2483 slab_unreclaimable:3326
> >  mapped:0 shmem:0 pagetables:1906 bounce:0
> >  free:6898 free_pcp:291 free_cma:0
> [...]
> > Free swap  = 8kB
> > Total swap = 255996kB
> > 524158 pages RAM
> > 0 pages HighMem/MovableOnly
> > 12658 pages reserved
> > 0 pages cma reserved
> > 0 pages hwpoisoned
> 
> From the above you can see that you are pretty much out of memory. There
> is basically no pagecache to reclaim and your anon memory is not 
> reclaimable either because the swap is basically full. It is true that 
> the high atomic reserves consume 19MB which could be reused but this 
> less than 1%, especially when you compare that to the amount of reserved
> memory.

I can show other log which reserve greater than 1%. See the DMA32 zone
free pages. It was GFP_ATOMIC allocation so it's different with I posted
but important thing is VM can reserve memory greater than 1% by the race
which was really what we want.

in:imklog: page allocation failure: order:0, mode:0x2280020(GFP_ATOMIC|__GFP_NOTRACK)
CPU: 0 PID: 476 Comm: in:imklog Tainted: G            E   4.8.0-rc7-00217-g266ef83c51e5-dirty #3135
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Ubuntu-1.8.2-1ubuntu1 04/01/2014
 0000000000000000 ffff880077c37590 ffffffff81389033 0000000000000000
 0000000000000000 ffff880077c37618 ffffffff8117519b 0228002000000000
 ffffffffffffffff ffffffff81cedb40 0000000000000000 0000000000000040
Call Trace:
 [<ffffffff81389033>] dump_stack+0x63/0x90
 [<ffffffff8117519b>] warn_alloc_failed+0xdb/0x130
 [<ffffffff81175746>] __alloc_pages_nodemask+0x4d6/0xdb0
 [<ffffffff8120c149>] ? bdev_write_page+0xa9/0xd0
 [<ffffffff811a97b3>] ? __page_check_address+0xd3/0x130
 [<ffffffff811ba4ea>] ? deactivate_slab+0x12a/0x3e0
 [<ffffffff811b9549>] new_slab+0x339/0x490
 [<ffffffff811bad37>] ___slab_alloc.constprop.74+0x367/0x480
 [<ffffffff814601ad>] ? alloc_indirect.isra.14+0x1d/0x50
 [<ffffffff8109d0c2>] ? default_wake_function+0x12/0x20
 [<ffffffff811bae70>] __slab_alloc.constprop.73+0x20/0x40
 [<ffffffff811bb034>] __kmalloc+0x1a4/0x1e0
 [<ffffffff814601ad>] alloc_indirect.isra.14+0x1d/0x50
 [<ffffffff81460434>] virtqueue_add_sgs+0x1c4/0x470
 [<ffffffff81365075>] ? __bt_get.isra.8+0xe5/0x1c0
 [<ffffffff8150973e>] __virtblk_add_req+0xae/0x1f0
 [<ffffffff810b37d0>] ? wake_atomic_t_function+0x60/0x60
 [<ffffffff810337b9>] ? sched_clock+0x9/0x10
 [<ffffffff81360afb>] ? __blk_mq_alloc_request+0x10b/0x230
 [<ffffffff8135e293>] ? blk_rq_map_sg+0x213/0x550
 [<ffffffff81509a1d>] virtio_queue_rq+0x12d/0x290
 [<ffffffff813629c9>] __blk_mq_run_hw_queue+0x239/0x370
 [<ffffffff8136276f>] blk_mq_run_hw_queue+0x8f/0xb0
 [<ffffffff8136397c>] blk_mq_insert_requests+0x18c/0x1a0
 [<ffffffff81364865>] blk_mq_flush_plug_list+0x125/0x140
 [<ffffffff813596a7>] blk_flush_plug_list+0xc7/0x220
 [<ffffffff81359bec>] blk_finish_plug+0x2c/0x40
 [<ffffffff8117b836>] __do_page_cache_readahead+0x196/0x230
 [<ffffffffa00006ba>] ? zram_free_page+0x3a/0xb0 [zram]
 [<ffffffff8116f928>] filemap_fault+0x448/0x4f0
 [<ffffffff8119e9e4>] ? alloc_set_pte+0xe4/0x350
 [<ffffffff8125fa16>] ext4_filemap_fault+0x36/0x50
 [<ffffffff8119be35>] __do_fault+0x75/0x140
 [<ffffffff8119f6cd>] handle_mm_fault+0x84d/0xbe0
 [<ffffffff812483e4>] ? kmsg_read+0x44/0x60
 [<ffffffff8106029d>] __do_page_fault+0x1dd/0x4d0
 [<ffffffff81060653>] trace_do_page_fault+0x43/0x130
 [<ffffffff81059bda>] do_async_page_fault+0x1a/0xa0
 [<ffffffff8179dcb8>] async_page_fault+0x28/0x30
Mem-Info:
active_anon:363826 inactive_anon:121283 isolated_anon:32
 active_file:65 inactive_file:152 isolated_file:0
 unevictable:0 dirty:0 writeback:46 unstable:0
 slab_reclaimable:2778 slab_unreclaimable:3070
 mapped:112 shmem:0 pagetables:1822 bounce:0
 free:9469 free_pcp:231 free_cma:0
Node 0 active_anon:1455304kB inactive_anon:485132kB active_file:260kB inactive_file:608kB unevictable:0kB isolated(anon):128kB isolated(file):0kB mapped:448kB dirty:0kB writeback:184kB shmem:0kB writeback_tmp:0kB unstable:0kB pages_scanned:13641 all_unreclaimable? no
DMA free:7748kB min:44kB low:56kB high:68kB active_anon:7944kB inactive_anon:104kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15992kB managed:15908kB mlocked:0kB slab_reclaimable:0kB slab_unreclaimable:108kB kernel_stack:0kB pagetables:4kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
lowmem_reserve[]: 0 1952 1952 1952
DMA32 free:30128kB min:5628kB low:7624kB high:9620kB active_anon:1447360kB inactive_anon:485028kB active_file:260kB inactive_file:608kB unevictable:0kB writepending:184kB present:2080640kB managed:2030132kB mlocked:0kB slab_reclaimable:11112kB slab_unreclaimable:12172kB kernel_stack:2400kB pagetables:7284kB bounce:0kB free_pcp:924kB local_pcp:72kB free_cma:0kB
lowmem_reserve[]: 0 0 0 0
DMA: 7*4kB (UE) 3*8kB (UH) 1*16kB (M) 0*32kB 2*64kB (U) 1*128kB (M) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (U) 1*4096kB (H) = 7748kB
DMA32: 10*4kB (H) 3*8kB (H) 47*16kB (H) 38*32kB (H) 5*64kB (H) 1*128kB (H) 2*256kB (H) 3*512kB (H) 3*1024kB (H) 3*2048kB (H) 4*4096kB (H) = 30128kB
2775 total pagecache pages
2536 pages in swap cache
Swap cache stats: add 206786828, delete 206784292, find 7323106/106686077
Free swap  = 108744kB
Total swap = 255996kB
524158 pages RAM
0 pages HighMem/MovableOnly
12648 pages reserved
0 pages cma reserved
0 pages hwpoisoned

> 
> So while I do agree that potential issues - misaccounting and others you
> are addressing in the follow up patch - are good to fix but I believe that
> draining last 19M is not something that would reliably get you over the
> edge. Your workload (93% of memory sitting on anon LRU with swap full)
> simply doesn't fit into the amount of memory you have available.

What happens if the workload fit into additional 19M memory?
I admit my testing aimed for proving the problem but with this patchset,
there is no OOM killing with many free pages and the number of OOM was
reduced highly. It is definitely better than old.

Please don't ignore 1% memory in embedded system. 20M memory in 2G system,
If we can use those for zram, it is 60~80M memory via compression.
You should know how many engineers try to reduce 1M of their driver to
cost down of the product, seriously.

> -- 
> Michal Hocko
> SUSE Labs

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 0/4] use up highorder free pages before OOM
@ 2016-10-07 15:04     ` Minchan Kim
  0 siblings, 0 replies; 56+ messages in thread
From: Minchan Kim @ 2016-10-07 15:04 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Andrew Morton, Mel Gorman, Vlastimil Babka, Joonsoo Kim,
	linux-kernel, linux-mm, Sangseok Lee

On Fri, Oct 07, 2016 at 11:16:26AM +0200, Michal Hocko wrote:
> On Fri 07-10-16 14:45:32, Minchan Kim wrote:
> > I got OOM report from production team with v4.4 kernel.
> > It has enough free memory but failed to allocate order-0 page and
> > finally encounter OOM kill.
> > I could reproduce it with my test easily. Look at below.
> > The reason is free pages(19M) of DMA32 zone are reserved for
> > HIGHORDERATOMIC and doesn't unreserved before the OOM.
> 
> Is this really reproducible?

I can reproduce in 1 hour.

> 
> [...]
> > active_anon:383949 inactive_anon:106724 isolated_anon:0
> >  active_file:15 inactive_file:44 isolated_file:0
> >  unevictable:0 dirty:0 writeback:24 unstable:0
> >  slab_reclaimable:2483 slab_unreclaimable:3326
> >  mapped:0 shmem:0 pagetables:1906 bounce:0
> >  free:6898 free_pcp:291 free_cma:0
> [...]
> > Free swap  = 8kB
> > Total swap = 255996kB
> > 524158 pages RAM
> > 0 pages HighMem/MovableOnly
> > 12658 pages reserved
> > 0 pages cma reserved
> > 0 pages hwpoisoned
> 
> From the above you can see that you are pretty much out of memory. There
> is basically no pagecache to reclaim and your anon memory is not 
> reclaimable either because the swap is basically full. It is true that 
> the high atomic reserves consume 19MB which could be reused but this 
> less than 1%, especially when you compare that to the amount of reserved
> memory.

I can show other log which reserve greater than 1%. See the DMA32 zone
free pages. It was GFP_ATOMIC allocation so it's different with I posted
but important thing is VM can reserve memory greater than 1% by the race
which was really what we want.

in:imklog: page allocation failure: order:0, mode:0x2280020(GFP_ATOMIC|__GFP_NOTRACK)
CPU: 0 PID: 476 Comm: in:imklog Tainted: G            E   4.8.0-rc7-00217-g266ef83c51e5-dirty #3135
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Ubuntu-1.8.2-1ubuntu1 04/01/2014
 0000000000000000 ffff880077c37590 ffffffff81389033 0000000000000000
 0000000000000000 ffff880077c37618 ffffffff8117519b 0228002000000000
 ffffffffffffffff ffffffff81cedb40 0000000000000000 0000000000000040
Call Trace:
 [<ffffffff81389033>] dump_stack+0x63/0x90
 [<ffffffff8117519b>] warn_alloc_failed+0xdb/0x130
 [<ffffffff81175746>] __alloc_pages_nodemask+0x4d6/0xdb0
 [<ffffffff8120c149>] ? bdev_write_page+0xa9/0xd0
 [<ffffffff811a97b3>] ? __page_check_address+0xd3/0x130
 [<ffffffff811ba4ea>] ? deactivate_slab+0x12a/0x3e0
 [<ffffffff811b9549>] new_slab+0x339/0x490
 [<ffffffff811bad37>] ___slab_alloc.constprop.74+0x367/0x480
 [<ffffffff814601ad>] ? alloc_indirect.isra.14+0x1d/0x50
 [<ffffffff8109d0c2>] ? default_wake_function+0x12/0x20
 [<ffffffff811bae70>] __slab_alloc.constprop.73+0x20/0x40
 [<ffffffff811bb034>] __kmalloc+0x1a4/0x1e0
 [<ffffffff814601ad>] alloc_indirect.isra.14+0x1d/0x50
 [<ffffffff81460434>] virtqueue_add_sgs+0x1c4/0x470
 [<ffffffff81365075>] ? __bt_get.isra.8+0xe5/0x1c0
 [<ffffffff8150973e>] __virtblk_add_req+0xae/0x1f0
 [<ffffffff810b37d0>] ? wake_atomic_t_function+0x60/0x60
 [<ffffffff810337b9>] ? sched_clock+0x9/0x10
 [<ffffffff81360afb>] ? __blk_mq_alloc_request+0x10b/0x230
 [<ffffffff8135e293>] ? blk_rq_map_sg+0x213/0x550
 [<ffffffff81509a1d>] virtio_queue_rq+0x12d/0x290
 [<ffffffff813629c9>] __blk_mq_run_hw_queue+0x239/0x370
 [<ffffffff8136276f>] blk_mq_run_hw_queue+0x8f/0xb0
 [<ffffffff8136397c>] blk_mq_insert_requests+0x18c/0x1a0
 [<ffffffff81364865>] blk_mq_flush_plug_list+0x125/0x140
 [<ffffffff813596a7>] blk_flush_plug_list+0xc7/0x220
 [<ffffffff81359bec>] blk_finish_plug+0x2c/0x40
 [<ffffffff8117b836>] __do_page_cache_readahead+0x196/0x230
 [<ffffffffa00006ba>] ? zram_free_page+0x3a/0xb0 [zram]
 [<ffffffff8116f928>] filemap_fault+0x448/0x4f0
 [<ffffffff8119e9e4>] ? alloc_set_pte+0xe4/0x350
 [<ffffffff8125fa16>] ext4_filemap_fault+0x36/0x50
 [<ffffffff8119be35>] __do_fault+0x75/0x140
 [<ffffffff8119f6cd>] handle_mm_fault+0x84d/0xbe0
 [<ffffffff812483e4>] ? kmsg_read+0x44/0x60
 [<ffffffff8106029d>] __do_page_fault+0x1dd/0x4d0
 [<ffffffff81060653>] trace_do_page_fault+0x43/0x130
 [<ffffffff81059bda>] do_async_page_fault+0x1a/0xa0
 [<ffffffff8179dcb8>] async_page_fault+0x28/0x30
Mem-Info:
active_anon:363826 inactive_anon:121283 isolated_anon:32
 active_file:65 inactive_file:152 isolated_file:0
 unevictable:0 dirty:0 writeback:46 unstable:0
 slab_reclaimable:2778 slab_unreclaimable:3070
 mapped:112 shmem:0 pagetables:1822 bounce:0
 free:9469 free_pcp:231 free_cma:0
Node 0 active_anon:1455304kB inactive_anon:485132kB active_file:260kB inactive_file:608kB unevictable:0kB isolated(anon):128kB isolated(file):0kB mapped:448kB dirty:0kB writeback:184kB shmem:0kB writeback_tmp:0kB unstable:0kB pages_scanned:13641 all_unreclaimable? no
DMA free:7748kB min:44kB low:56kB high:68kB active_anon:7944kB inactive_anon:104kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15992kB managed:15908kB mlocked:0kB slab_reclaimable:0kB slab_unreclaimable:108kB kernel_stack:0kB pagetables:4kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
lowmem_reserve[]: 0 1952 1952 1952
DMA32 free:30128kB min:5628kB low:7624kB high:9620kB active_anon:1447360kB inactive_anon:485028kB active_file:260kB inactive_file:608kB unevictable:0kB writepending:184kB present:2080640kB managed:2030132kB mlocked:0kB slab_reclaimable:11112kB slab_unreclaimable:12172kB kernel_stack:2400kB pagetables:7284kB bounce:0kB free_pcp:924kB local_pcp:72kB free_cma:0kB
lowmem_reserve[]: 0 0 0 0
DMA: 7*4kB (UE) 3*8kB (UH) 1*16kB (M) 0*32kB 2*64kB (U) 1*128kB (M) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (U) 1*4096kB (H) = 7748kB
DMA32: 10*4kB (H) 3*8kB (H) 47*16kB (H) 38*32kB (H) 5*64kB (H) 1*128kB (H) 2*256kB (H) 3*512kB (H) 3*1024kB (H) 3*2048kB (H) 4*4096kB (H) = 30128kB
2775 total pagecache pages
2536 pages in swap cache
Swap cache stats: add 206786828, delete 206784292, find 7323106/106686077
Free swap  = 108744kB
Total swap = 255996kB
524158 pages RAM
0 pages HighMem/MovableOnly
12648 pages reserved
0 pages cma reserved
0 pages hwpoisoned

> 
> So while I do agree that potential issues - misaccounting and others you
> are addressing in the follow up patch - are good to fix but I believe that
> draining last 19M is not something that would reliably get you over the
> edge. Your workload (93% of memory sitting on anon LRU with swap full)
> simply doesn't fit into the amount of memory you have available.

What happens if the workload fit into additional 19M memory?
I admit my testing aimed for proving the problem but with this patchset,
there is no OOM killing with many free pages and the number of OOM was
reduced highly. It is definitely better than old.

Please don't ignore 1% memory in embedded system. 20M memory in 2G system,
If we can use those for zram, it is 60~80M memory via compression.
You should know how many engineers try to reduce 1M of their driver to
cost down of the product, seriously.

> -- 
> Michal Hocko
> SUSE Labs

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 1/4] mm: adjust reserved highatomic count
  2016-10-07 14:29       ` Minchan Kim
@ 2016-10-10  6:57         ` Vlastimil Babka
  -1 siblings, 0 replies; 56+ messages in thread
From: Vlastimil Babka @ 2016-10-10  6:57 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, Mel Gorman, Joonsoo Kim, linux-kernel, linux-mm,
	Sangseok Lee

On 10/07/2016 04:29 PM, Minchan Kim wrote:
>>> In that case, we should adjust nr_reserved_highatomic.
>>> Otherwise, VM cannot reserve highorderatomic pageblocks any more
>>> although it doesn't reach 1% limit. It means highorder atomic
>>> allocation failure would be higher.
>>>
>>> So, this patch decreases the account as well as migratetype
>>> if it was MIGRATE_HIGHATOMIC.
>>>
>>> Signed-off-by: Minchan Kim <minchan@kernel.org>
>>
>> Hm wouldn't it be simpler just to prevent the pageblock's migratetype to be
>> changed if it's highatomic? Possibly also not do move_freepages_block() in
>
> It could be. Actually, I did it with modifying can_steal_fallback which returns
> false it found the pageblock is highorderatomic but changed to this way again
> because I don't have any justification to prevent changing pageblock.
> If you give concrete justification so others isn't against on it, I am happy to
> do what you suggested.

Well, MIGRATE_HIGHATOMIC is not listed in the fallbacks array at all, so 
we are not supposed to steal from it in the first place. Stealing will 
only happen due to races, which would be too costly to close, so we 
allow them and expect to be rare. But we shouldn't allow them to break 
the accounting.

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 1/4] mm: adjust reserved highatomic count
@ 2016-10-10  6:57         ` Vlastimil Babka
  0 siblings, 0 replies; 56+ messages in thread
From: Vlastimil Babka @ 2016-10-10  6:57 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, Mel Gorman, Joonsoo Kim, linux-kernel, linux-mm,
	Sangseok Lee

On 10/07/2016 04:29 PM, Minchan Kim wrote:
>>> In that case, we should adjust nr_reserved_highatomic.
>>> Otherwise, VM cannot reserve highorderatomic pageblocks any more
>>> although it doesn't reach 1% limit. It means highorder atomic
>>> allocation failure would be higher.
>>>
>>> So, this patch decreases the account as well as migratetype
>>> if it was MIGRATE_HIGHATOMIC.
>>>
>>> Signed-off-by: Minchan Kim <minchan@kernel.org>
>>
>> Hm wouldn't it be simpler just to prevent the pageblock's migratetype to be
>> changed if it's highatomic? Possibly also not do move_freepages_block() in
>
> It could be. Actually, I did it with modifying can_steal_fallback which returns
> false it found the pageblock is highorderatomic but changed to this way again
> because I don't have any justification to prevent changing pageblock.
> If you give concrete justification so others isn't against on it, I am happy to
> do what you suggested.

Well, MIGRATE_HIGHATOMIC is not listed in the fallbacks array at all, so 
we are not supposed to steal from it in the first place. Stealing will 
only happen due to races, which would be too costly to close, so we 
allow them and expect to be rare. But we shouldn't allow them to break 
the accounting.


--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 3/4] mm: unreserve highatomic free pages fully before OOM
  2016-10-07 14:43       ` Minchan Kim
@ 2016-10-10  7:41         ` Michal Hocko
  -1 siblings, 0 replies; 56+ messages in thread
From: Michal Hocko @ 2016-10-10  7:41 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, Mel Gorman, Vlastimil Babka, Joonsoo Kim,
	linux-kernel, linux-mm, Sangseok Lee

On Fri 07-10-16 23:43:45, Minchan Kim wrote:
> On Fri, Oct 07, 2016 at 11:09:17AM +0200, Michal Hocko wrote:
> > On Fri 07-10-16 14:45:35, Minchan Kim wrote:
> > > After fixing the race of highatomic page count, I still encounter
> > > OOM with many free memory reserved as highatomic.
> > > 
> > > One of reason in my testing was we unreserve free pages only if
> > > reclaim has progress. Otherwise, we cannot have chance to unreseve.
> > > 
> > > Other problem after fixing it was it doesn't guarantee every pages
> > > unreserving of highatomic pageblock because it just release *a*
> > > pageblock which could have few free pages so other context could
> > > steal it easily so that the process stucked with direct reclaim
> > > finally can encounter OOM although there are free pages which can
> > > be unreserved.
> > > 
> > > This patch changes the logic so that it unreserves pageblocks with
> > > no_progress_loop proportionally. IOW, in first retrial of reclaim,
> > > it will try to unreserve a pageblock. In second retrial of reclaim,
> > > it will try to unreserve 1/MAX_RECLAIM_RETRIES * reserved_pageblock
> > > and finally all reserved pageblock before the OOM.
> > > 
> > > Signed-off-by: Minchan Kim <minchan@kernel.org>
> > > ---
> > >  mm/page_alloc.c | 57 ++++++++++++++++++++++++++++++++++++++++++++-------------
> > >  1 file changed, 44 insertions(+), 13 deletions(-)
> > 
> > This sounds much more complex then it needs to be IMHO. Why something as
> > simple as thhe following wouldn't work? Please note that I even didn't
> > try to compile this. It is just give you an idea.
> > ---
> >  mm/page_alloc.c | 26 ++++++++++++++++++++------
> >  1 file changed, 20 insertions(+), 6 deletions(-)
> > 
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 73f60ad6315f..e575a4f38555 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -2056,7 +2056,8 @@ static void reserve_highatomic_pageblock(struct page *page, struct zone *zone,
> >   * intense memory pressure but failed atomic allocations should be easier
> >   * to recover from than an OOM.
> >   */
> > -static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
> > +static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
> > +		bool force)
> >  {
> >  	struct zonelist *zonelist = ac->zonelist;
> >  	unsigned long flags;
> > @@ -2067,8 +2068,14 @@ static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
> >  
> >  	for_each_zone_zonelist_nodemask(zone, z, zonelist, ac->high_zoneidx,
> >  								ac->nodemask) {
> > -		/* Preserve at least one pageblock */
> > -		if (zone->nr_reserved_highatomic <= pageblock_nr_pages)
> > +		if (!zone->nr_reserved_highatomic)
> > +			continue;
> > +
> > +		/*
> > +		 * Preserve at least one pageblock unless we are really running
> > +		 * out of memory
> > +		 */
> > +		if (!force && zone->nr_reserved_highatomic <= pageblock_nr_pages)
> >  			continue;
> >  
> >  		spin_lock_irqsave(&zone->lock, flags);
> > @@ -2102,10 +2109,12 @@ static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
> >  			set_pageblock_migratetype(page, ac->migratetype);
> >  			move_freepages_block(zone, page, ac->migratetype);
> >  			spin_unlock_irqrestore(&zone->lock, flags);
> > -			return;
> > +			return true;
> 
> Such cut-off makes reserved pageblock remained before the OOM.
> We call it as premature OOM kill.

Not sure I understand. The above should get rid of all atomic reserves
before we go OOM. We can do it all at once but that sounds too
aggressive to me. If we just do one at the time we have a chance to
keep some reserves if the OOM situation is really ephemeral.

Does this patch work in your usecase?

> If you feel it's rather complex, simply, we can drain *every*
> reserved pages when no_progress_loops is greater than MAX_RECLAIM_RETRIES.
> Do you want it?
> 
> It's rather conservative approach to keep highatomic pageblocks.
> Anyway, I think it's matter of policy and my goal is just to use up
> every reserved pageblock before OOM so anyone never say
> "Hey, enough free pages which is greater than min watermark but
> why I should see the OOM with GFP_KERNEL 4K allocation".

I agree that triggering OOM while we are above min watermarks is less
than optimial and we should think about how to fix it.
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 3/4] mm: unreserve highatomic free pages fully before OOM
@ 2016-10-10  7:41         ` Michal Hocko
  0 siblings, 0 replies; 56+ messages in thread
From: Michal Hocko @ 2016-10-10  7:41 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, Mel Gorman, Vlastimil Babka, Joonsoo Kim,
	linux-kernel, linux-mm, Sangseok Lee

On Fri 07-10-16 23:43:45, Minchan Kim wrote:
> On Fri, Oct 07, 2016 at 11:09:17AM +0200, Michal Hocko wrote:
> > On Fri 07-10-16 14:45:35, Minchan Kim wrote:
> > > After fixing the race of highatomic page count, I still encounter
> > > OOM with many free memory reserved as highatomic.
> > > 
> > > One of reason in my testing was we unreserve free pages only if
> > > reclaim has progress. Otherwise, we cannot have chance to unreseve.
> > > 
> > > Other problem after fixing it was it doesn't guarantee every pages
> > > unreserving of highatomic pageblock because it just release *a*
> > > pageblock which could have few free pages so other context could
> > > steal it easily so that the process stucked with direct reclaim
> > > finally can encounter OOM although there are free pages which can
> > > be unreserved.
> > > 
> > > This patch changes the logic so that it unreserves pageblocks with
> > > no_progress_loop proportionally. IOW, in first retrial of reclaim,
> > > it will try to unreserve a pageblock. In second retrial of reclaim,
> > > it will try to unreserve 1/MAX_RECLAIM_RETRIES * reserved_pageblock
> > > and finally all reserved pageblock before the OOM.
> > > 
> > > Signed-off-by: Minchan Kim <minchan@kernel.org>
> > > ---
> > >  mm/page_alloc.c | 57 ++++++++++++++++++++++++++++++++++++++++++++-------------
> > >  1 file changed, 44 insertions(+), 13 deletions(-)
> > 
> > This sounds much more complex then it needs to be IMHO. Why something as
> > simple as thhe following wouldn't work? Please note that I even didn't
> > try to compile this. It is just give you an idea.
> > ---
> >  mm/page_alloc.c | 26 ++++++++++++++++++++------
> >  1 file changed, 20 insertions(+), 6 deletions(-)
> > 
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 73f60ad6315f..e575a4f38555 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -2056,7 +2056,8 @@ static void reserve_highatomic_pageblock(struct page *page, struct zone *zone,
> >   * intense memory pressure but failed atomic allocations should be easier
> >   * to recover from than an OOM.
> >   */
> > -static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
> > +static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
> > +		bool force)
> >  {
> >  	struct zonelist *zonelist = ac->zonelist;
> >  	unsigned long flags;
> > @@ -2067,8 +2068,14 @@ static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
> >  
> >  	for_each_zone_zonelist_nodemask(zone, z, zonelist, ac->high_zoneidx,
> >  								ac->nodemask) {
> > -		/* Preserve at least one pageblock */
> > -		if (zone->nr_reserved_highatomic <= pageblock_nr_pages)
> > +		if (!zone->nr_reserved_highatomic)
> > +			continue;
> > +
> > +		/*
> > +		 * Preserve at least one pageblock unless we are really running
> > +		 * out of memory
> > +		 */
> > +		if (!force && zone->nr_reserved_highatomic <= pageblock_nr_pages)
> >  			continue;
> >  
> >  		spin_lock_irqsave(&zone->lock, flags);
> > @@ -2102,10 +2109,12 @@ static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
> >  			set_pageblock_migratetype(page, ac->migratetype);
> >  			move_freepages_block(zone, page, ac->migratetype);
> >  			spin_unlock_irqrestore(&zone->lock, flags);
> > -			return;
> > +			return true;
> 
> Such cut-off makes reserved pageblock remained before the OOM.
> We call it as premature OOM kill.

Not sure I understand. The above should get rid of all atomic reserves
before we go OOM. We can do it all at once but that sounds too
aggressive to me. If we just do one at the time we have a chance to
keep some reserves if the OOM situation is really ephemeral.

Does this patch work in your usecase?

> If you feel it's rather complex, simply, we can drain *every*
> reserved pages when no_progress_loops is greater than MAX_RECLAIM_RETRIES.
> Do you want it?
> 
> It's rather conservative approach to keep highatomic pageblocks.
> Anyway, I think it's matter of policy and my goal is just to use up
> every reserved pageblock before OOM so anyone never say
> "Hey, enough free pages which is greater than min watermark but
> why I should see the OOM with GFP_KERNEL 4K allocation".

I agree that triggering OOM while we are above min watermarks is less
than optimial and we should think about how to fix it.
-- 
Michal Hocko
SUSE Labs

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 0/4] use up highorder free pages before OOM
  2016-10-07 15:04     ` Minchan Kim
@ 2016-10-10  7:47       ` Michal Hocko
  -1 siblings, 0 replies; 56+ messages in thread
From: Michal Hocko @ 2016-10-10  7:47 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, Mel Gorman, Vlastimil Babka, Joonsoo Kim,
	linux-kernel, linux-mm, Sangseok Lee

On Sat 08-10-16 00:04:25, Minchan Kim wrote:
[...]
> I can show other log which reserve greater than 1%. See the DMA32 zone
> free pages. It was GFP_ATOMIC allocation so it's different with I posted
> but important thing is VM can reserve memory greater than 1% by the race
> which was really what we want.
> 
> in:imklog: page allocation failure: order:0, mode:0x2280020(GFP_ATOMIC|__GFP_NOTRACK)
[...]
> DMA: 7*4kB (UE) 3*8kB (UH) 1*16kB (M) 0*32kB 2*64kB (U) 1*128kB (M) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (U) 1*4096kB (H) = 7748kB
> DMA32: 10*4kB (H) 3*8kB (H) 47*16kB (H) 38*32kB (H) 5*64kB (H) 1*128kB (H) 2*256kB (H) 3*512kB (H) 3*1024kB (H) 3*2048kB (H) 4*4096kB (H) = 30128kB

Yes, this sounds like a bug. Please add this information to the patch
which aims to fix the misaccounting.

> > So while I do agree that potential issues - misaccounting and others you
> > are addressing in the follow up patch - are good to fix but I believe that
> > draining last 19M is not something that would reliably get you over the
> > edge. Your workload (93% of memory sitting on anon LRU with swap full)
> > simply doesn't fit into the amount of memory you have available.
> 
> What happens if the workload fit into additional 19M memory?
> I admit my testing aimed for proving the problem but with this patchset,
> there is no OOM killing with many free pages and the number of OOM was
> reduced highly. It is definitely better than old.
> 
> Please don't ignore 1% memory in embedded system. 20M memory in 2G system,
> If we can use those for zram, it is 60~80M memory via compression.
> You should know how many engineers try to reduce 1M of their driver to
> cost down of the product, seriously.

I am definitely not ignoring neither embedded systems nor 1% of the
memory that might really matter. I just wanted to point out that being
that close to OOM usually blows up later or starts trashing very soon.
It is true that a particular workload might benefit from ever last
allocatable page in the system but it would be better to mention all
that in the changelog.
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 0/4] use up highorder free pages before OOM
@ 2016-10-10  7:47       ` Michal Hocko
  0 siblings, 0 replies; 56+ messages in thread
From: Michal Hocko @ 2016-10-10  7:47 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, Mel Gorman, Vlastimil Babka, Joonsoo Kim,
	linux-kernel, linux-mm, Sangseok Lee

On Sat 08-10-16 00:04:25, Minchan Kim wrote:
[...]
> I can show other log which reserve greater than 1%. See the DMA32 zone
> free pages. It was GFP_ATOMIC allocation so it's different with I posted
> but important thing is VM can reserve memory greater than 1% by the race
> which was really what we want.
> 
> in:imklog: page allocation failure: order:0, mode:0x2280020(GFP_ATOMIC|__GFP_NOTRACK)
[...]
> DMA: 7*4kB (UE) 3*8kB (UH) 1*16kB (M) 0*32kB 2*64kB (U) 1*128kB (M) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (U) 1*4096kB (H) = 7748kB
> DMA32: 10*4kB (H) 3*8kB (H) 47*16kB (H) 38*32kB (H) 5*64kB (H) 1*128kB (H) 2*256kB (H) 3*512kB (H) 3*1024kB (H) 3*2048kB (H) 4*4096kB (H) = 30128kB

Yes, this sounds like a bug. Please add this information to the patch
which aims to fix the misaccounting.

> > So while I do agree that potential issues - misaccounting and others you
> > are addressing in the follow up patch - are good to fix but I believe that
> > draining last 19M is not something that would reliably get you over the
> > edge. Your workload (93% of memory sitting on anon LRU with swap full)
> > simply doesn't fit into the amount of memory you have available.
> 
> What happens if the workload fit into additional 19M memory?
> I admit my testing aimed for proving the problem but with this patchset,
> there is no OOM killing with many free pages and the number of OOM was
> reduced highly. It is definitely better than old.
> 
> Please don't ignore 1% memory in embedded system. 20M memory in 2G system,
> If we can use those for zram, it is 60~80M memory via compression.
> You should know how many engineers try to reduce 1M of their driver to
> cost down of the product, seriously.

I am definitely not ignoring neither embedded systems nor 1% of the
memory that might really matter. I just wanted to point out that being
that close to OOM usually blows up later or starts trashing very soon.
It is true that a particular workload might benefit from ever last
allocatable page in the system but it would be better to mention all
that in the changelog.
-- 
Michal Hocko
SUSE Labs

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 1/4] mm: adjust reserved highatomic count
  2016-10-10  6:57         ` Vlastimil Babka
@ 2016-10-11  4:19           ` Minchan Kim
  -1 siblings, 0 replies; 56+ messages in thread
From: Minchan Kim @ 2016-10-11  4:19 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Andrew Morton, Mel Gorman, Joonsoo Kim, linux-kernel, linux-mm,
	Sangseok Lee

Hi Vlasimil,

On Mon, Oct 10, 2016 at 08:57:40AM +0200, Vlastimil Babka wrote:
> On 10/07/2016 04:29 PM, Minchan Kim wrote:
> >>>In that case, we should adjust nr_reserved_highatomic.
> >>>Otherwise, VM cannot reserve highorderatomic pageblocks any more
> >>>although it doesn't reach 1% limit. It means highorder atomic
> >>>allocation failure would be higher.
> >>>
> >>>So, this patch decreases the account as well as migratetype
> >>>if it was MIGRATE_HIGHATOMIC.
> >>>
> >>>Signed-off-by: Minchan Kim <minchan@kernel.org>
> >>
> >>Hm wouldn't it be simpler just to prevent the pageblock's migratetype to be
> >>changed if it's highatomic? Possibly also not do move_freepages_block() in
> >
> >It could be. Actually, I did it with modifying can_steal_fallback which returns
> >false it found the pageblock is highorderatomic but changed to this way again
> >because I don't have any justification to prevent changing pageblock.
> >If you give concrete justification so others isn't against on it, I am happy to
> >do what you suggested.
> 
> Well, MIGRATE_HIGHATOMIC is not listed in the fallbacks array at all, so we
> are not supposed to steal from it in the first place. Stealing will only
> happen due to races, which would be too costly to close, so we allow them
> and expect to be rare. But we shouldn't allow them to break the accounting.
> 

Fair enough.
How about this?

>From 4a0b6a74ebf1af7f90720b0028da49e2e2a2b679 Mon Sep 17 00:00:00 2001
From: Minchan Kim <minchan@kernel.org>
Date: Thu, 6 Oct 2016 13:38:35 +0900
Subject: [PATCH] mm: don't steal highatomic pageblock

In page freeing path, migratetype is racy so that a highorderatomic
page could free into non-highorderatomic free list. If that page
is allocated, VM can change the pageblock from higorderatomic to
something. In that case, highatomic pageblock accounting is broken
so it doesn't work(e.g., VM cannot reserve highorderatomic pageblocks
any more although it doesn't reach 1% limit).

So, this patch prohibits the changing from highatomic to other type.
It's no problem because MIGRATE_HIGHATOMIC is not listed in fallback
array so stealing will only happen due to unexpected races which is
really rare. Also, such prohibiting keeps highatomic pageblock more
longer so it would be better for highorderatomic page allocation.

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 mm/page_alloc.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 55ad0229ebf3..79853b258211 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2154,7 +2154,8 @@ __rmqueue_fallback(struct zone *zone, unsigned int order, int start_migratetype)
 
 		page = list_first_entry(&area->free_list[fallback_mt],
 						struct page, lru);
-		if (can_steal)
+		if (can_steal &&
+			get_pageblock_migratetype(page) != MIGRATE_HIGHATOMIC)
 			steal_suitable_fallback(zone, page, start_migratetype);
 
 		/* Remove the page from the freelists */
@@ -2555,7 +2556,8 @@ int __isolate_free_page(struct page *page, unsigned int order)
 		struct page *endpage = page + (1 << order) - 1;
 		for (; page < endpage; page += pageblock_nr_pages) {
 			int mt = get_pageblock_migratetype(page);
-			if (!is_migrate_isolate(mt) && !is_migrate_cma(mt))
+			if (!is_migrate_isolate(mt) && !is_migrate_cma(mt)
+				&& mt != MIGRATE_HIGHATOMIC)
 				set_pageblock_migratetype(page,
 							  MIGRATE_MOVABLE);
 		}
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 56+ messages in thread

* Re: [PATCH 1/4] mm: adjust reserved highatomic count
@ 2016-10-11  4:19           ` Minchan Kim
  0 siblings, 0 replies; 56+ messages in thread
From: Minchan Kim @ 2016-10-11  4:19 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Andrew Morton, Mel Gorman, Joonsoo Kim, linux-kernel, linux-mm,
	Sangseok Lee

Hi Vlasimil,

On Mon, Oct 10, 2016 at 08:57:40AM +0200, Vlastimil Babka wrote:
> On 10/07/2016 04:29 PM, Minchan Kim wrote:
> >>>In that case, we should adjust nr_reserved_highatomic.
> >>>Otherwise, VM cannot reserve highorderatomic pageblocks any more
> >>>although it doesn't reach 1% limit. It means highorder atomic
> >>>allocation failure would be higher.
> >>>
> >>>So, this patch decreases the account as well as migratetype
> >>>if it was MIGRATE_HIGHATOMIC.
> >>>
> >>>Signed-off-by: Minchan Kim <minchan@kernel.org>
> >>
> >>Hm wouldn't it be simpler just to prevent the pageblock's migratetype to be
> >>changed if it's highatomic? Possibly also not do move_freepages_block() in
> >
> >It could be. Actually, I did it with modifying can_steal_fallback which returns
> >false it found the pageblock is highorderatomic but changed to this way again
> >because I don't have any justification to prevent changing pageblock.
> >If you give concrete justification so others isn't against on it, I am happy to
> >do what you suggested.
> 
> Well, MIGRATE_HIGHATOMIC is not listed in the fallbacks array at all, so we
> are not supposed to steal from it in the first place. Stealing will only
> happen due to races, which would be too costly to close, so we allow them
> and expect to be rare. But we shouldn't allow them to break the accounting.
> 

Fair enough.
How about this?

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 3/4] mm: unreserve highatomic free pages fully before OOM
  2016-10-10  7:41         ` Michal Hocko
@ 2016-10-11  5:01           ` Minchan Kim
  -1 siblings, 0 replies; 56+ messages in thread
From: Minchan Kim @ 2016-10-11  5:01 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Andrew Morton, Mel Gorman, Vlastimil Babka, Joonsoo Kim,
	linux-kernel, linux-mm, Sangseok Lee

Hi Michal,

On Mon, Oct 10, 2016 at 09:41:40AM +0200, Michal Hocko wrote:
> On Fri 07-10-16 23:43:45, Minchan Kim wrote:
> > On Fri, Oct 07, 2016 at 11:09:17AM +0200, Michal Hocko wrote:
> > > On Fri 07-10-16 14:45:35, Minchan Kim wrote:
> > > > After fixing the race of highatomic page count, I still encounter
> > > > OOM with many free memory reserved as highatomic.
> > > > 
> > > > One of reason in my testing was we unreserve free pages only if
> > > > reclaim has progress. Otherwise, we cannot have chance to unreseve.
> > > > 
> > > > Other problem after fixing it was it doesn't guarantee every pages
> > > > unreserving of highatomic pageblock because it just release *a*
> > > > pageblock which could have few free pages so other context could
> > > > steal it easily so that the process stucked with direct reclaim
> > > > finally can encounter OOM although there are free pages which can
> > > > be unreserved.
> > > > 
> > > > This patch changes the logic so that it unreserves pageblocks with
> > > > no_progress_loop proportionally. IOW, in first retrial of reclaim,
> > > > it will try to unreserve a pageblock. In second retrial of reclaim,
> > > > it will try to unreserve 1/MAX_RECLAIM_RETRIES * reserved_pageblock
> > > > and finally all reserved pageblock before the OOM.
> > > > 
> > > > Signed-off-by: Minchan Kim <minchan@kernel.org>
> > > > ---
> > > >  mm/page_alloc.c | 57 ++++++++++++++++++++++++++++++++++++++++++++-------------
> > > >  1 file changed, 44 insertions(+), 13 deletions(-)
> > > 
> > > This sounds much more complex then it needs to be IMHO. Why something as
> > > simple as thhe following wouldn't work? Please note that I even didn't
> > > try to compile this. It is just give you an idea.
> > > ---
> > >  mm/page_alloc.c | 26 ++++++++++++++++++++------
> > >  1 file changed, 20 insertions(+), 6 deletions(-)
> > > 
> > > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > > index 73f60ad6315f..e575a4f38555 100644
> > > --- a/mm/page_alloc.c
> > > +++ b/mm/page_alloc.c
> > > @@ -2056,7 +2056,8 @@ static void reserve_highatomic_pageblock(struct page *page, struct zone *zone,
> > >   * intense memory pressure but failed atomic allocations should be easier
> > >   * to recover from than an OOM.
> > >   */
> > > -static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
> > > +static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
> > > +		bool force)
> > >  {
> > >  	struct zonelist *zonelist = ac->zonelist;
> > >  	unsigned long flags;
> > > @@ -2067,8 +2068,14 @@ static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
> > >  
> > >  	for_each_zone_zonelist_nodemask(zone, z, zonelist, ac->high_zoneidx,
> > >  								ac->nodemask) {
> > > -		/* Preserve at least one pageblock */
> > > -		if (zone->nr_reserved_highatomic <= pageblock_nr_pages)
> > > +		if (!zone->nr_reserved_highatomic)
> > > +			continue;
> > > +
> > > +		/*
> > > +		 * Preserve at least one pageblock unless we are really running
> > > +		 * out of memory
> > > +		 */
> > > +		if (!force && zone->nr_reserved_highatomic <= pageblock_nr_pages)
> > >  			continue;
> > >  
> > >  		spin_lock_irqsave(&zone->lock, flags);
> > > @@ -2102,10 +2109,12 @@ static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
> > >  			set_pageblock_migratetype(page, ac->migratetype);
> > >  			move_freepages_block(zone, page, ac->migratetype);
> > >  			spin_unlock_irqrestore(&zone->lock, flags);
> > > -			return;
> > > +			return true;
> > 
> > Such cut-off makes reserved pageblock remained before the OOM.
> > We call it as premature OOM kill.
> 
> Not sure I understand. The above should get rid of all atomic reserves
> before we go OOM. We can do it all at once but that sounds too

The problem is there is race between page freeing path and unreserve
logic so that some pages could be in highatomic free list even though
zone->nr_reserved_highatomic is already zero.
So, at least, it would be better to have a draining step at some point
where was (no_progress_loops == MAX_RECLAIM RETRIES) in my patch.

Also, your patch makes retry loop greater than MAX_RECLAIM_RETRIES
if unreserve_highatomic_pageblock returns true. Theoretically,
it would make live lock. You might argue it's *really really* rare
but I don't want to add such subtle thing.
Maybe, we could drain when no_progress_loops == MAX_RECLAIM_RETRIES.

> aggressive to me. If we just do one at the time we have a chance to
> keep some reserves if the OOM situation is really ephemeral.
> 
> Does this patch work in your usecase?

I didn't test but I guess it works but it has problems I mentioned
above. 

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 3/4] mm: unreserve highatomic free pages fully before OOM
@ 2016-10-11  5:01           ` Minchan Kim
  0 siblings, 0 replies; 56+ messages in thread
From: Minchan Kim @ 2016-10-11  5:01 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Andrew Morton, Mel Gorman, Vlastimil Babka, Joonsoo Kim,
	linux-kernel, linux-mm, Sangseok Lee

Hi Michal,

On Mon, Oct 10, 2016 at 09:41:40AM +0200, Michal Hocko wrote:
> On Fri 07-10-16 23:43:45, Minchan Kim wrote:
> > On Fri, Oct 07, 2016 at 11:09:17AM +0200, Michal Hocko wrote:
> > > On Fri 07-10-16 14:45:35, Minchan Kim wrote:
> > > > After fixing the race of highatomic page count, I still encounter
> > > > OOM with many free memory reserved as highatomic.
> > > > 
> > > > One of reason in my testing was we unreserve free pages only if
> > > > reclaim has progress. Otherwise, we cannot have chance to unreseve.
> > > > 
> > > > Other problem after fixing it was it doesn't guarantee every pages
> > > > unreserving of highatomic pageblock because it just release *a*
> > > > pageblock which could have few free pages so other context could
> > > > steal it easily so that the process stucked with direct reclaim
> > > > finally can encounter OOM although there are free pages which can
> > > > be unreserved.
> > > > 
> > > > This patch changes the logic so that it unreserves pageblocks with
> > > > no_progress_loop proportionally. IOW, in first retrial of reclaim,
> > > > it will try to unreserve a pageblock. In second retrial of reclaim,
> > > > it will try to unreserve 1/MAX_RECLAIM_RETRIES * reserved_pageblock
> > > > and finally all reserved pageblock before the OOM.
> > > > 
> > > > Signed-off-by: Minchan Kim <minchan@kernel.org>
> > > > ---
> > > >  mm/page_alloc.c | 57 ++++++++++++++++++++++++++++++++++++++++++++-------------
> > > >  1 file changed, 44 insertions(+), 13 deletions(-)
> > > 
> > > This sounds much more complex then it needs to be IMHO. Why something as
> > > simple as thhe following wouldn't work? Please note that I even didn't
> > > try to compile this. It is just give you an idea.
> > > ---
> > >  mm/page_alloc.c | 26 ++++++++++++++++++++------
> > >  1 file changed, 20 insertions(+), 6 deletions(-)
> > > 
> > > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > > index 73f60ad6315f..e575a4f38555 100644
> > > --- a/mm/page_alloc.c
> > > +++ b/mm/page_alloc.c
> > > @@ -2056,7 +2056,8 @@ static void reserve_highatomic_pageblock(struct page *page, struct zone *zone,
> > >   * intense memory pressure but failed atomic allocations should be easier
> > >   * to recover from than an OOM.
> > >   */
> > > -static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
> > > +static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
> > > +		bool force)
> > >  {
> > >  	struct zonelist *zonelist = ac->zonelist;
> > >  	unsigned long flags;
> > > @@ -2067,8 +2068,14 @@ static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
> > >  
> > >  	for_each_zone_zonelist_nodemask(zone, z, zonelist, ac->high_zoneidx,
> > >  								ac->nodemask) {
> > > -		/* Preserve at least one pageblock */
> > > -		if (zone->nr_reserved_highatomic <= pageblock_nr_pages)
> > > +		if (!zone->nr_reserved_highatomic)
> > > +			continue;
> > > +
> > > +		/*
> > > +		 * Preserve at least one pageblock unless we are really running
> > > +		 * out of memory
> > > +		 */
> > > +		if (!force && zone->nr_reserved_highatomic <= pageblock_nr_pages)
> > >  			continue;
> > >  
> > >  		spin_lock_irqsave(&zone->lock, flags);
> > > @@ -2102,10 +2109,12 @@ static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
> > >  			set_pageblock_migratetype(page, ac->migratetype);
> > >  			move_freepages_block(zone, page, ac->migratetype);
> > >  			spin_unlock_irqrestore(&zone->lock, flags);
> > > -			return;
> > > +			return true;
> > 
> > Such cut-off makes reserved pageblock remained before the OOM.
> > We call it as premature OOM kill.
> 
> Not sure I understand. The above should get rid of all atomic reserves
> before we go OOM. We can do it all at once but that sounds too

The problem is there is race between page freeing path and unreserve
logic so that some pages could be in highatomic free list even though
zone->nr_reserved_highatomic is already zero.
So, at least, it would be better to have a draining step at some point
where was (no_progress_loops == MAX_RECLAIM RETRIES) in my patch.

Also, your patch makes retry loop greater than MAX_RECLAIM_RETRIES
if unreserve_highatomic_pageblock returns true. Theoretically,
it would make live lock. You might argue it's *really really* rare
but I don't want to add such subtle thing.
Maybe, we could drain when no_progress_loops == MAX_RECLAIM_RETRIES.

> aggressive to me. If we just do one at the time we have a chance to
> keep some reserves if the OOM situation is really ephemeral.
> 
> Does this patch work in your usecase?

I didn't test but I guess it works but it has problems I mentioned
above. 

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 0/4] use up highorder free pages before OOM
  2016-10-10  7:47       ` Michal Hocko
@ 2016-10-11  5:06         ` Minchan Kim
  -1 siblings, 0 replies; 56+ messages in thread
From: Minchan Kim @ 2016-10-11  5:06 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Andrew Morton, Mel Gorman, Vlastimil Babka, Joonsoo Kim,
	linux-kernel, linux-mm, Sangseok Lee

On Mon, Oct 10, 2016 at 09:47:31AM +0200, Michal Hocko wrote:
> On Sat 08-10-16 00:04:25, Minchan Kim wrote:
> [...]
> > I can show other log which reserve greater than 1%. See the DMA32 zone
> > free pages. It was GFP_ATOMIC allocation so it's different with I posted
> > but important thing is VM can reserve memory greater than 1% by the race
> > which was really what we want.
> > 
> > in:imklog: page allocation failure: order:0, mode:0x2280020(GFP_ATOMIC|__GFP_NOTRACK)
> [...]
> > DMA: 7*4kB (UE) 3*8kB (UH) 1*16kB (M) 0*32kB 2*64kB (U) 1*128kB (M) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (U) 1*4096kB (H) = 7748kB
> > DMA32: 10*4kB (H) 3*8kB (H) 47*16kB (H) 38*32kB (H) 5*64kB (H) 1*128kB (H) 2*256kB (H) 3*512kB (H) 3*1024kB (H) 3*2048kB (H) 4*4096kB (H) = 30128kB
> 
> Yes, this sounds like a bug. Please add this information to the patch
> which aims to fix the misaccounting.

No problem.

> 
> > > So while I do agree that potential issues - misaccounting and others you
> > > are addressing in the follow up patch - are good to fix but I believe that
> > > draining last 19M is not something that would reliably get you over the
> > > edge. Your workload (93% of memory sitting on anon LRU with swap full)
> > > simply doesn't fit into the amount of memory you have available.
> > 
> > What happens if the workload fit into additional 19M memory?
> > I admit my testing aimed for proving the problem but with this patchset,
> > there is no OOM killing with many free pages and the number of OOM was
> > reduced highly. It is definitely better than old.
> > 
> > Please don't ignore 1% memory in embedded system. 20M memory in 2G system,
> > If we can use those for zram, it is 60~80M memory via compression.
> > You should know how many engineers try to reduce 1M of their driver to
> > cost down of the product, seriously.
> 
> I am definitely not ignoring neither embedded systems nor 1% of the
> memory that might really matter. I just wanted to point out that being

Whew and I thought you were serious.

> that close to OOM usually blows up later or starts trashing very soon.
> It is true that a particular workload might benefit from ever last
> allocatable page in the system but it would be better to mention all
> that in the changelog.

I don't unerstand what phrase you really want to include the changelog.
I will add the information which isolate 30M free pages before 4K page
allocation failure in next version. If you want something to add,
please say again.

Thanks for the review, Michal.

> -- 
> Michal Hocko
> SUSE Labs
> 
> --
> To unsubscribe, send a message with 'unsubscribe linux-mm' in
> the body to majordomo@kvack.org.  For more info on Linux MM,
> see: http://www.linux-mm.org/ .
> Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 0/4] use up highorder free pages before OOM
@ 2016-10-11  5:06         ` Minchan Kim
  0 siblings, 0 replies; 56+ messages in thread
From: Minchan Kim @ 2016-10-11  5:06 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Andrew Morton, Mel Gorman, Vlastimil Babka, Joonsoo Kim,
	linux-kernel, linux-mm, Sangseok Lee

On Mon, Oct 10, 2016 at 09:47:31AM +0200, Michal Hocko wrote:
> On Sat 08-10-16 00:04:25, Minchan Kim wrote:
> [...]
> > I can show other log which reserve greater than 1%. See the DMA32 zone
> > free pages. It was GFP_ATOMIC allocation so it's different with I posted
> > but important thing is VM can reserve memory greater than 1% by the race
> > which was really what we want.
> > 
> > in:imklog: page allocation failure: order:0, mode:0x2280020(GFP_ATOMIC|__GFP_NOTRACK)
> [...]
> > DMA: 7*4kB (UE) 3*8kB (UH) 1*16kB (M) 0*32kB 2*64kB (U) 1*128kB (M) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (U) 1*4096kB (H) = 7748kB
> > DMA32: 10*4kB (H) 3*8kB (H) 47*16kB (H) 38*32kB (H) 5*64kB (H) 1*128kB (H) 2*256kB (H) 3*512kB (H) 3*1024kB (H) 3*2048kB (H) 4*4096kB (H) = 30128kB
> 
> Yes, this sounds like a bug. Please add this information to the patch
> which aims to fix the misaccounting.

No problem.

> 
> > > So while I do agree that potential issues - misaccounting and others you
> > > are addressing in the follow up patch - are good to fix but I believe that
> > > draining last 19M is not something that would reliably get you over the
> > > edge. Your workload (93% of memory sitting on anon LRU with swap full)
> > > simply doesn't fit into the amount of memory you have available.
> > 
> > What happens if the workload fit into additional 19M memory?
> > I admit my testing aimed for proving the problem but with this patchset,
> > there is no OOM killing with many free pages and the number of OOM was
> > reduced highly. It is definitely better than old.
> > 
> > Please don't ignore 1% memory in embedded system. 20M memory in 2G system,
> > If we can use those for zram, it is 60~80M memory via compression.
> > You should know how many engineers try to reduce 1M of their driver to
> > cost down of the product, seriously.
> 
> I am definitely not ignoring neither embedded systems nor 1% of the
> memory that might really matter. I just wanted to point out that being

Whew and I thought you were serious.

> that close to OOM usually blows up later or starts trashing very soon.
> It is true that a particular workload might benefit from ever last
> allocatable page in the system but it would be better to mention all
> that in the changelog.

I don't unerstand what phrase you really want to include the changelog.
I will add the information which isolate 30M free pages before 4K page
allocation failure in next version. If you want something to add,
please say again.

Thanks for the review, Michal.

> -- 
> Michal Hocko
> SUSE Labs
> 
> --
> To unsubscribe, send a message with 'unsubscribe linux-mm' in
> the body to majordomo@kvack.org.  For more info on Linux MM,
> see: http://www.linux-mm.org/ .
> Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 3/4] mm: unreserve highatomic free pages fully before OOM
  2016-10-11  5:01           ` Minchan Kim
@ 2016-10-11  6:50             ` Michal Hocko
  -1 siblings, 0 replies; 56+ messages in thread
From: Michal Hocko @ 2016-10-11  6:50 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, Mel Gorman, Vlastimil Babka, Joonsoo Kim,
	linux-kernel, linux-mm, Sangseok Lee

On Tue 11-10-16 14:01:41, Minchan Kim wrote:
> Hi Michal,
> 
> On Mon, Oct 10, 2016 at 09:41:40AM +0200, Michal Hocko wrote:
> > On Fri 07-10-16 23:43:45, Minchan Kim wrote:
> > > On Fri, Oct 07, 2016 at 11:09:17AM +0200, Michal Hocko wrote:
[...]
> > > > @@ -2102,10 +2109,12 @@ static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
> > > >  			set_pageblock_migratetype(page, ac->migratetype);
> > > >  			move_freepages_block(zone, page, ac->migratetype);
> > > >  			spin_unlock_irqrestore(&zone->lock, flags);
> > > > -			return;
> > > > +			return true;
> > > 
> > > Such cut-off makes reserved pageblock remained before the OOM.
> > > We call it as premature OOM kill.
> > 
> > Not sure I understand. The above should get rid of all atomic reserves
> > before we go OOM. We can do it all at once but that sounds too
> 
> The problem is there is race between page freeing path and unreserve
> logic so that some pages could be in highatomic free list even though
> zone->nr_reserved_highatomic is already zero.

Does it make any sense to handle such an unlikely case?

> So, at least, it would be better to have a draining step at some point
> where was (no_progress_loops == MAX_RECLAIM RETRIES) in my patch.
> 
> Also, your patch makes retry loop greater than MAX_RECLAIM_RETRIES
> if unreserve_highatomic_pageblock returns true. Theoretically,
> it would make live lock. You might argue it's *really really* rare
> but I don't want to add such subtle thing.
> Maybe, we could drain when no_progress_loops == MAX_RECLAIM_RETRIES.

What would be the scenario when we would really livelock here? How can
we have unreserve_highatomic_pageblock returning true for ever?

> > aggressive to me. If we just do one at the time we have a chance to
> > keep some reserves if the OOM situation is really ephemeral.
> > 
> > Does this patch work in your usecase?
> 
> I didn't test but I guess it works but it has problems I mentioned
> above. 

Please do not make this too over complicated and be practical. I do not
really want to dismiss your usecase but I am really not convinced that
such a "perfectly fit into all memory" situations are sustainable and
justify to make the whole code more complex. I agree that we can at
least try to do something to release those reserves but let's do it
as simple as possible.

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 3/4] mm: unreserve highatomic free pages fully before OOM
@ 2016-10-11  6:50             ` Michal Hocko
  0 siblings, 0 replies; 56+ messages in thread
From: Michal Hocko @ 2016-10-11  6:50 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, Mel Gorman, Vlastimil Babka, Joonsoo Kim,
	linux-kernel, linux-mm, Sangseok Lee

On Tue 11-10-16 14:01:41, Minchan Kim wrote:
> Hi Michal,
> 
> On Mon, Oct 10, 2016 at 09:41:40AM +0200, Michal Hocko wrote:
> > On Fri 07-10-16 23:43:45, Minchan Kim wrote:
> > > On Fri, Oct 07, 2016 at 11:09:17AM +0200, Michal Hocko wrote:
[...]
> > > > @@ -2102,10 +2109,12 @@ static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
> > > >  			set_pageblock_migratetype(page, ac->migratetype);
> > > >  			move_freepages_block(zone, page, ac->migratetype);
> > > >  			spin_unlock_irqrestore(&zone->lock, flags);
> > > > -			return;
> > > > +			return true;
> > > 
> > > Such cut-off makes reserved pageblock remained before the OOM.
> > > We call it as premature OOM kill.
> > 
> > Not sure I understand. The above should get rid of all atomic reserves
> > before we go OOM. We can do it all at once but that sounds too
> 
> The problem is there is race between page freeing path and unreserve
> logic so that some pages could be in highatomic free list even though
> zone->nr_reserved_highatomic is already zero.

Does it make any sense to handle such an unlikely case?

> So, at least, it would be better to have a draining step at some point
> where was (no_progress_loops == MAX_RECLAIM RETRIES) in my patch.
> 
> Also, your patch makes retry loop greater than MAX_RECLAIM_RETRIES
> if unreserve_highatomic_pageblock returns true. Theoretically,
> it would make live lock. You might argue it's *really really* rare
> but I don't want to add such subtle thing.
> Maybe, we could drain when no_progress_loops == MAX_RECLAIM_RETRIES.

What would be the scenario when we would really livelock here? How can
we have unreserve_highatomic_pageblock returning true for ever?

> > aggressive to me. If we just do one at the time we have a chance to
> > keep some reserves if the OOM situation is really ephemeral.
> > 
> > Does this patch work in your usecase?
> 
> I didn't test but I guess it works but it has problems I mentioned
> above. 

Please do not make this too over complicated and be practical. I do not
really want to dismiss your usecase but I am really not convinced that
such a "perfectly fit into all memory" situations are sustainable and
justify to make the whole code more complex. I agree that we can at
least try to do something to release those reserves but let's do it
as simple as possible.

-- 
Michal Hocko
SUSE Labs

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 0/4] use up highorder free pages before OOM
  2016-10-11  5:06         ` Minchan Kim
@ 2016-10-11  6:53           ` Michal Hocko
  -1 siblings, 0 replies; 56+ messages in thread
From: Michal Hocko @ 2016-10-11  6:53 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, Mel Gorman, Vlastimil Babka, Joonsoo Kim,
	linux-kernel, linux-mm, Sangseok Lee

On Tue 11-10-16 14:06:43, Minchan Kim wrote:
> On Mon, Oct 10, 2016 at 09:47:31AM +0200, Michal Hocko wrote:
[...]
> > that close to OOM usually blows up later or starts trashing very soon.
> > It is true that a particular workload might benefit from ever last
> > allocatable page in the system but it would be better to mention all
> > that in the changelog.
> 
> I don't unerstand what phrase you really want to include the changelog.
> I will add the information which isolate 30M free pages before 4K page
> allocation failure in next version. If you want something to add,
> please say again.

Describe your usecase where the additional 1% of memory can allow a
sustainable workload without OOM. This is not usually the case as I've
tried to explain but it is true that the compression might change the
picture somehow. If your testcase is artificial, try to explain how it
emulates a real workload etc...
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 0/4] use up highorder free pages before OOM
@ 2016-10-11  6:53           ` Michal Hocko
  0 siblings, 0 replies; 56+ messages in thread
From: Michal Hocko @ 2016-10-11  6:53 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, Mel Gorman, Vlastimil Babka, Joonsoo Kim,
	linux-kernel, linux-mm, Sangseok Lee

On Tue 11-10-16 14:06:43, Minchan Kim wrote:
> On Mon, Oct 10, 2016 at 09:47:31AM +0200, Michal Hocko wrote:
[...]
> > that close to OOM usually blows up later or starts trashing very soon.
> > It is true that a particular workload might benefit from ever last
> > allocatable page in the system but it would be better to mention all
> > that in the changelog.
> 
> I don't unerstand what phrase you really want to include the changelog.
> I will add the information which isolate 30M free pages before 4K page
> allocation failure in next version. If you want something to add,
> please say again.

Describe your usecase where the additional 1% of memory can allow a
sustainable workload without OOM. This is not usually the case as I've
tried to explain but it is true that the compression might change the
picture somehow. If your testcase is artificial, try to explain how it
emulates a real workload etc...
-- 
Michal Hocko
SUSE Labs

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 3/4] mm: unreserve highatomic free pages fully before OOM
  2016-10-11  6:50             ` Michal Hocko
@ 2016-10-11  7:09               ` Minchan Kim
  -1 siblings, 0 replies; 56+ messages in thread
From: Minchan Kim @ 2016-10-11  7:09 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Andrew Morton, Mel Gorman, Vlastimil Babka, Joonsoo Kim,
	linux-kernel, linux-mm, Sangseok Lee

On Tue, Oct 11, 2016 at 08:50:48AM +0200, Michal Hocko wrote:
> On Tue 11-10-16 14:01:41, Minchan Kim wrote:
> > Hi Michal,
> > 
> > On Mon, Oct 10, 2016 at 09:41:40AM +0200, Michal Hocko wrote:
> > > On Fri 07-10-16 23:43:45, Minchan Kim wrote:
> > > > On Fri, Oct 07, 2016 at 11:09:17AM +0200, Michal Hocko wrote:
> [...]
> > > > > @@ -2102,10 +2109,12 @@ static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
> > > > >  			set_pageblock_migratetype(page, ac->migratetype);
> > > > >  			move_freepages_block(zone, page, ac->migratetype);
> > > > >  			spin_unlock_irqrestore(&zone->lock, flags);
> > > > > -			return;
> > > > > +			return true;
> > > > 
> > > > Such cut-off makes reserved pageblock remained before the OOM.
> > > > We call it as premature OOM kill.
> > > 
> > > Not sure I understand. The above should get rid of all atomic reserves
> > > before we go OOM. We can do it all at once but that sounds too
> > 
> > The problem is there is race between page freeing path and unreserve
> > logic so that some pages could be in highatomic free list even though
> > zone->nr_reserved_highatomic is already zero.
> 
> Does it make any sense to handle such an unlikely case?

I agree if it's really hard to solve but why should we remain
such hole in the algorithm if we can fix easily?

> 
> > So, at least, it would be better to have a draining step at some point
> > where was (no_progress_loops == MAX_RECLAIM RETRIES) in my patch.
> > 
> > Also, your patch makes retry loop greater than MAX_RECLAIM_RETRIES
> > if unreserve_highatomic_pageblock returns true. Theoretically,
> > it would make live lock. You might argue it's *really really* rare
> > but I don't want to add such subtle thing.
> > Maybe, we could drain when no_progress_loops == MAX_RECLAIM_RETRIES.
> 
> What would be the scenario when we would really livelock here? How can
> we have unreserve_highatomic_pageblock returning true for ever?

Other context freeing highorder page/reallocating repeatedly while
a process stucked direct reclaim is looping with should_reclaim_retry.

> 
> > > aggressive to me. If we just do one at the time we have a chance to
> > > keep some reserves if the OOM situation is really ephemeral.
> > > 
> > > Does this patch work in your usecase?
> > 
> > I didn't test but I guess it works but it has problems I mentioned
> > above. 
> 
> Please do not make this too over complicated and be practical. I do not
> really want to dismiss your usecase but I am really not convinced that
> such a "perfectly fit into all memory" situations are sustainable and
> justify to make the whole code more complex. I agree that we can at
> least try to do something to release those reserves but let's do it
> as simple as possible.

If you think it's too complicated, how about this?

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index fd91b8955b26..e3ce442e9976 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2098,7 +2098,8 @@ static void reserve_highatomic_pageblock(struct page *page, struct zone *zone,
  * intense memory pressure but failed atomic allocations should be easier
  * to recover from than an OOM.
  */
-static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
+static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
+						bool drain)
 {
 	struct zonelist *zonelist = ac->zonelist;
 	unsigned long flags;
@@ -2106,11 +2107,12 @@ static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
 	struct zone *zone;
 	struct page *page;
 	int order;
+	bool ret = false;
 
 	for_each_zone_zonelist_nodemask(zone, z, zonelist, ac->high_zoneidx,
 								ac->nodemask) {
 		/* Preserve at least one pageblock */
-		if (zone->nr_reserved_highatomic <= pageblock_nr_pages)
+		if (!drain && zone->nr_reserved_highatomic <= pageblock_nr_pages)
 			continue;
 
 		spin_lock_irqsave(&zone->lock, flags);
@@ -2154,12 +2156,24 @@ static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
 			 * may increase.
 			 */
 			set_pageblock_migratetype(page, ac->migratetype);
-			move_freepages_block(zone, page, ac->migratetype);
-			spin_unlock_irqrestore(&zone->lock, flags);
-			return;
+			ret = move_freepages_block(zone, page,
+						ac->migratetype);
+			/*
+			 * By race with page freeing functions, !highatomic
+			 * pageblocks can have free pages in highatomic free
+			 * list so if drain is true, try to unreserve every
+			 * free pages in highatomic free list without bailing
+			 * out.
+			 */
+			if (!drain) {
+				spin_unlock_irqrestore(&zone->lock, flags);
+				return ret;
+			}
 		}
 		spin_unlock_irqrestore(&zone->lock, flags);
 	}
+
+	return ret;
 }
 
 /* Remove an element from the buddy allocator from the fallback list */
@@ -3358,7 +3372,7 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
 	 * Shrink them them and try again
 	 */
 	if (!page && !drained) {
-		unreserve_highatomic_pageblock(ac);
+		unreserve_highatomic_pageblock(ac, false);
 		drain_all_pages(NULL);
 		drained = true;
 		goto retry;
@@ -3475,8 +3489,11 @@ should_reclaim_retry(gfp_t gfp_mask, unsigned order,
 	 * Make sure we converge to OOM if we cannot make any progress
 	 * several times in the row.
 	 */
-	if (*no_progress_loops > MAX_RECLAIM_RETRIES)
+	if (*no_progress_loops > MAX_RECLAIM_RETRIES) {
+		if (unreserve_highatomic_pageblock(ac, true))
+			return true;
 		return false;
+	}
 
 	/*
 	 * Keep reclaiming pages while there is a chance this will lead
> 
> -- 
> Michal Hocko
> SUSE Labs
> 
> --
> To unsubscribe, send a message with 'unsubscribe linux-mm' in
> the body to majordomo@kvack.org.  For more info on Linux MM,
> see: http://www.linux-mm.org/ .
> Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply related	[flat|nested] 56+ messages in thread

* Re: [PATCH 3/4] mm: unreserve highatomic free pages fully before OOM
@ 2016-10-11  7:09               ` Minchan Kim
  0 siblings, 0 replies; 56+ messages in thread
From: Minchan Kim @ 2016-10-11  7:09 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Andrew Morton, Mel Gorman, Vlastimil Babka, Joonsoo Kim,
	linux-kernel, linux-mm, Sangseok Lee

On Tue, Oct 11, 2016 at 08:50:48AM +0200, Michal Hocko wrote:
> On Tue 11-10-16 14:01:41, Minchan Kim wrote:
> > Hi Michal,
> > 
> > On Mon, Oct 10, 2016 at 09:41:40AM +0200, Michal Hocko wrote:
> > > On Fri 07-10-16 23:43:45, Minchan Kim wrote:
> > > > On Fri, Oct 07, 2016 at 11:09:17AM +0200, Michal Hocko wrote:
> [...]
> > > > > @@ -2102,10 +2109,12 @@ static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
> > > > >  			set_pageblock_migratetype(page, ac->migratetype);
> > > > >  			move_freepages_block(zone, page, ac->migratetype);
> > > > >  			spin_unlock_irqrestore(&zone->lock, flags);
> > > > > -			return;
> > > > > +			return true;
> > > > 
> > > > Such cut-off makes reserved pageblock remained before the OOM.
> > > > We call it as premature OOM kill.
> > > 
> > > Not sure I understand. The above should get rid of all atomic reserves
> > > before we go OOM. We can do it all at once but that sounds too
> > 
> > The problem is there is race between page freeing path and unreserve
> > logic so that some pages could be in highatomic free list even though
> > zone->nr_reserved_highatomic is already zero.
> 
> Does it make any sense to handle such an unlikely case?

I agree if it's really hard to solve but why should we remain
such hole in the algorithm if we can fix easily?

> 
> > So, at least, it would be better to have a draining step at some point
> > where was (no_progress_loops == MAX_RECLAIM RETRIES) in my patch.
> > 
> > Also, your patch makes retry loop greater than MAX_RECLAIM_RETRIES
> > if unreserve_highatomic_pageblock returns true. Theoretically,
> > it would make live lock. You might argue it's *really really* rare
> > but I don't want to add such subtle thing.
> > Maybe, we could drain when no_progress_loops == MAX_RECLAIM_RETRIES.
> 
> What would be the scenario when we would really livelock here? How can
> we have unreserve_highatomic_pageblock returning true for ever?

Other context freeing highorder page/reallocating repeatedly while
a process stucked direct reclaim is looping with should_reclaim_retry.

> 
> > > aggressive to me. If we just do one at the time we have a chance to
> > > keep some reserves if the OOM situation is really ephemeral.
> > > 
> > > Does this patch work in your usecase?
> > 
> > I didn't test but I guess it works but it has problems I mentioned
> > above. 
> 
> Please do not make this too over complicated and be practical. I do not
> really want to dismiss your usecase but I am really not convinced that
> such a "perfectly fit into all memory" situations are sustainable and
> justify to make the whole code more complex. I agree that we can at
> least try to do something to release those reserves but let's do it
> as simple as possible.

If you think it's too complicated, how about this?

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index fd91b8955b26..e3ce442e9976 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2098,7 +2098,8 @@ static void reserve_highatomic_pageblock(struct page *page, struct zone *zone,
  * intense memory pressure but failed atomic allocations should be easier
  * to recover from than an OOM.
  */
-static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
+static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
+						bool drain)
 {
 	struct zonelist *zonelist = ac->zonelist;
 	unsigned long flags;
@@ -2106,11 +2107,12 @@ static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
 	struct zone *zone;
 	struct page *page;
 	int order;
+	bool ret = false;
 
 	for_each_zone_zonelist_nodemask(zone, z, zonelist, ac->high_zoneidx,
 								ac->nodemask) {
 		/* Preserve at least one pageblock */
-		if (zone->nr_reserved_highatomic <= pageblock_nr_pages)
+		if (!drain && zone->nr_reserved_highatomic <= pageblock_nr_pages)
 			continue;
 
 		spin_lock_irqsave(&zone->lock, flags);
@@ -2154,12 +2156,24 @@ static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
 			 * may increase.
 			 */
 			set_pageblock_migratetype(page, ac->migratetype);
-			move_freepages_block(zone, page, ac->migratetype);
-			spin_unlock_irqrestore(&zone->lock, flags);
-			return;
+			ret = move_freepages_block(zone, page,
+						ac->migratetype);
+			/*
+			 * By race with page freeing functions, !highatomic
+			 * pageblocks can have free pages in highatomic free
+			 * list so if drain is true, try to unreserve every
+			 * free pages in highatomic free list without bailing
+			 * out.
+			 */
+			if (!drain) {
+				spin_unlock_irqrestore(&zone->lock, flags);
+				return ret;
+			}
 		}
 		spin_unlock_irqrestore(&zone->lock, flags);
 	}
+
+	return ret;
 }
 
 /* Remove an element from the buddy allocator from the fallback list */
@@ -3358,7 +3372,7 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
 	 * Shrink them them and try again
 	 */
 	if (!page && !drained) {
-		unreserve_highatomic_pageblock(ac);
+		unreserve_highatomic_pageblock(ac, false);
 		drain_all_pages(NULL);
 		drained = true;
 		goto retry;
@@ -3475,8 +3489,11 @@ should_reclaim_retry(gfp_t gfp_mask, unsigned order,
 	 * Make sure we converge to OOM if we cannot make any progress
 	 * several times in the row.
 	 */
-	if (*no_progress_loops > MAX_RECLAIM_RETRIES)
+	if (*no_progress_loops > MAX_RECLAIM_RETRIES) {
+		if (unreserve_highatomic_pageblock(ac, true))
+			return true;
 		return false;
+	}
 
 	/*
 	 * Keep reclaiming pages while there is a chance this will lead
> 
> -- 
> Michal Hocko
> SUSE Labs
> 
> --
> To unsubscribe, send a message with 'unsubscribe linux-mm' in
> the body to majordomo@kvack.org.  For more info on Linux MM,
> see: http://www.linux-mm.org/ .
> Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply related	[flat|nested] 56+ messages in thread

* Re: [PATCH 3/4] mm: unreserve highatomic free pages fully before OOM
  2016-10-11  7:09               ` Minchan Kim
@ 2016-10-11  7:26                 ` Michal Hocko
  -1 siblings, 0 replies; 56+ messages in thread
From: Michal Hocko @ 2016-10-11  7:26 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, Mel Gorman, Vlastimil Babka, Joonsoo Kim,
	linux-kernel, linux-mm, Sangseok Lee

On Tue 11-10-16 16:09:45, Minchan Kim wrote:
> On Tue, Oct 11, 2016 at 08:50:48AM +0200, Michal Hocko wrote:
> > On Tue 11-10-16 14:01:41, Minchan Kim wrote:
[...]
> > > Also, your patch makes retry loop greater than MAX_RECLAIM_RETRIES
> > > if unreserve_highatomic_pageblock returns true. Theoretically,
> > > it would make live lock. You might argue it's *really really* rare
> > > but I don't want to add such subtle thing.
> > > Maybe, we could drain when no_progress_loops == MAX_RECLAIM_RETRIES.
> > 
> > What would be the scenario when we would really livelock here? How can
> > we have unreserve_highatomic_pageblock returning true for ever?
> 
> Other context freeing highorder page/reallocating repeatedly while
> a process stucked direct reclaim is looping with should_reclaim_retry.

If we unreserve those pages then we should converge to OOM. Btw. this
can happen even without highmem reserves. Heavy short lived allocations
might keep us looping at the lowest priority. They are just too unlikely
to care about.

> > > > aggressive to me. If we just do one at the time we have a chance to
> > > > keep some reserves if the OOM situation is really ephemeral.
> > > > 
> > > > Does this patch work in your usecase?
> > > 
> > > I didn't test but I guess it works but it has problems I mentioned
> > > above. 
> > 
> > Please do not make this too over complicated and be practical. I do not
> > really want to dismiss your usecase but I am really not convinced that
> > such a "perfectly fit into all memory" situations are sustainable and
> > justify to make the whole code more complex. I agree that we can at
> > least try to do something to release those reserves but let's do it
> > as simple as possible.
> 
> If you think it's too complicated, how about this?

Definitely better than the original patch. Little bit too aggressive
because we could really go with one block at the time. But this is a
minor thing and easily fixable...

> @@ -2154,12 +2156,24 @@ static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
>  			 * may increase.
>  			 */
>  			set_pageblock_migratetype(page, ac->migratetype);
> -			move_freepages_block(zone, page, ac->migratetype);
> -			spin_unlock_irqrestore(&zone->lock, flags);
> -			return;
> +			ret = move_freepages_block(zone, page,
> +						ac->migratetype);
> +			/*
> +			 * By race with page freeing functions, !highatomic
> +			 * pageblocks can have free pages in highatomic free
> +			 * list so if drain is true, try to unreserve every
> +			 * free pages in highatomic free list without bailing
> +			 * out.
> +			 */
> +			if (!drain) {

			if (ret)
> +				spin_unlock_irqrestore(&zone->lock, flags);
> +				return ret;
> +			}

arguably this would work better also for !drain case which currently
tries to unreserve but in case of the race it would do nothing.

>  		}
>  		spin_unlock_irqrestore(&zone->lock, flags);
>  	}
> +
> +	return ret;
>  }
>  
>  /* Remove an element from the buddy allocator from the fallback list */
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 3/4] mm: unreserve highatomic free pages fully before OOM
@ 2016-10-11  7:26                 ` Michal Hocko
  0 siblings, 0 replies; 56+ messages in thread
From: Michal Hocko @ 2016-10-11  7:26 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, Mel Gorman, Vlastimil Babka, Joonsoo Kim,
	linux-kernel, linux-mm, Sangseok Lee

On Tue 11-10-16 16:09:45, Minchan Kim wrote:
> On Tue, Oct 11, 2016 at 08:50:48AM +0200, Michal Hocko wrote:
> > On Tue 11-10-16 14:01:41, Minchan Kim wrote:
[...]
> > > Also, your patch makes retry loop greater than MAX_RECLAIM_RETRIES
> > > if unreserve_highatomic_pageblock returns true. Theoretically,
> > > it would make live lock. You might argue it's *really really* rare
> > > but I don't want to add such subtle thing.
> > > Maybe, we could drain when no_progress_loops == MAX_RECLAIM_RETRIES.
> > 
> > What would be the scenario when we would really livelock here? How can
> > we have unreserve_highatomic_pageblock returning true for ever?
> 
> Other context freeing highorder page/reallocating repeatedly while
> a process stucked direct reclaim is looping with should_reclaim_retry.

If we unreserve those pages then we should converge to OOM. Btw. this
can happen even without highmem reserves. Heavy short lived allocations
might keep us looping at the lowest priority. They are just too unlikely
to care about.

> > > > aggressive to me. If we just do one at the time we have a chance to
> > > > keep some reserves if the OOM situation is really ephemeral.
> > > > 
> > > > Does this patch work in your usecase?
> > > 
> > > I didn't test but I guess it works but it has problems I mentioned
> > > above. 
> > 
> > Please do not make this too over complicated and be practical. I do not
> > really want to dismiss your usecase but I am really not convinced that
> > such a "perfectly fit into all memory" situations are sustainable and
> > justify to make the whole code more complex. I agree that we can at
> > least try to do something to release those reserves but let's do it
> > as simple as possible.
> 
> If you think it's too complicated, how about this?

Definitely better than the original patch. Little bit too aggressive
because we could really go with one block at the time. But this is a
minor thing and easily fixable...

> @@ -2154,12 +2156,24 @@ static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
>  			 * may increase.
>  			 */
>  			set_pageblock_migratetype(page, ac->migratetype);
> -			move_freepages_block(zone, page, ac->migratetype);
> -			spin_unlock_irqrestore(&zone->lock, flags);
> -			return;
> +			ret = move_freepages_block(zone, page,
> +						ac->migratetype);
> +			/*
> +			 * By race with page freeing functions, !highatomic
> +			 * pageblocks can have free pages in highatomic free
> +			 * list so if drain is true, try to unreserve every
> +			 * free pages in highatomic free list without bailing
> +			 * out.
> +			 */
> +			if (!drain) {

			if (ret)
> +				spin_unlock_irqrestore(&zone->lock, flags);
> +				return ret;
> +			}

arguably this would work better also for !drain case which currently
tries to unreserve but in case of the race it would do nothing.

>  		}
>  		spin_unlock_irqrestore(&zone->lock, flags);
>  	}
> +
> +	return ret;
>  }
>  
>  /* Remove an element from the buddy allocator from the fallback list */
-- 
Michal Hocko
SUSE Labs

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 3/4] mm: unreserve highatomic free pages fully before OOM
  2016-10-11  7:26                 ` Michal Hocko
@ 2016-10-11  7:37                   ` Minchan Kim
  -1 siblings, 0 replies; 56+ messages in thread
From: Minchan Kim @ 2016-10-11  7:37 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Andrew Morton, Mel Gorman, Vlastimil Babka, Joonsoo Kim,
	linux-kernel, linux-mm, Sangseok Lee

On Tue, Oct 11, 2016 at 09:26:06AM +0200, Michal Hocko wrote:
> On Tue 11-10-16 16:09:45, Minchan Kim wrote:
> > On Tue, Oct 11, 2016 at 08:50:48AM +0200, Michal Hocko wrote:
> > > On Tue 11-10-16 14:01:41, Minchan Kim wrote:
> [...]
> > > > Also, your patch makes retry loop greater than MAX_RECLAIM_RETRIES
> > > > if unreserve_highatomic_pageblock returns true. Theoretically,
> > > > it would make live lock. You might argue it's *really really* rare
> > > > but I don't want to add such subtle thing.
> > > > Maybe, we could drain when no_progress_loops == MAX_RECLAIM_RETRIES.
> > > 
> > > What would be the scenario when we would really livelock here? How can
> > > we have unreserve_highatomic_pageblock returning true for ever?
> > 
> > Other context freeing highorder page/reallocating repeatedly while
> > a process stucked direct reclaim is looping with should_reclaim_retry.
> 
> If we unreserve those pages then we should converge to OOM. Btw. this
> can happen even without highmem reserves. Heavy short lived allocations
> might keep us looping at the lowest priority. They are just too unlikely
> to care about.

Indeed.
> 
> > > > > aggressive to me. If we just do one at the time we have a chance to
> > > > > keep some reserves if the OOM situation is really ephemeral.
> > > > > 
> > > > > Does this patch work in your usecase?
> > > > 
> > > > I didn't test but I guess it works but it has problems I mentioned
> > > > above. 
> > > 
> > > Please do not make this too over complicated and be practical. I do not
> > > really want to dismiss your usecase but I am really not convinced that
> > > such a "perfectly fit into all memory" situations are sustainable and
> > > justify to make the whole code more complex. I agree that we can at
> > > least try to do something to release those reserves but let's do it
> > > as simple as possible.
> > 
> > If you think it's too complicated, how about this?
> 
> Definitely better than the original patch. Little bit too aggressive
> because we could really go with one block at the time. But this is a
> minor thing and easily fixable...
> 
> > @@ -2154,12 +2156,24 @@ static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
> >  			 * may increase.
> >  			 */
> >  			set_pageblock_migratetype(page, ac->migratetype);
> > -			move_freepages_block(zone, page, ac->migratetype);
> > -			spin_unlock_irqrestore(&zone->lock, flags);
> > -			return;
> > +			ret = move_freepages_block(zone, page,
> > +						ac->migratetype);
> > +			/*
> > +			 * By race with page freeing functions, !highatomic
> > +			 * pageblocks can have free pages in highatomic free
> > +			 * list so if drain is true, try to unreserve every
> > +			 * free pages in highatomic free list without bailing
> > +			 * out.
> > +			 */
> > +			if (!drain) {
> 
> 			if (ret)
> > +				spin_unlock_irqrestore(&zone->lock, flags);
> > +				return ret;
> > +			}
> 
> arguably this would work better also for !drain case which currently
> tries to unreserve but in case of the race it would do nothing.

I thought it but I was afraid if you say again it's over complicated.
I will do it with your SOB in next spin.

Thanks, Michal.

> 
> >  		}
> >  		spin_unlock_irqrestore(&zone->lock, flags);
> >  	}
> > +
> > +	return ret;
> >  }
> >  
> >  /* Remove an element from the buddy allocator from the fallback list */
> -- 
> Michal Hocko
> SUSE Labs
> 
> --
> To unsubscribe, send a message with 'unsubscribe linux-mm' in
> the body to majordomo@kvack.org.  For more info on Linux MM,
> see: http://www.linux-mm.org/ .
> Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 3/4] mm: unreserve highatomic free pages fully before OOM
@ 2016-10-11  7:37                   ` Minchan Kim
  0 siblings, 0 replies; 56+ messages in thread
From: Minchan Kim @ 2016-10-11  7:37 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Andrew Morton, Mel Gorman, Vlastimil Babka, Joonsoo Kim,
	linux-kernel, linux-mm, Sangseok Lee

On Tue, Oct 11, 2016 at 09:26:06AM +0200, Michal Hocko wrote:
> On Tue 11-10-16 16:09:45, Minchan Kim wrote:
> > On Tue, Oct 11, 2016 at 08:50:48AM +0200, Michal Hocko wrote:
> > > On Tue 11-10-16 14:01:41, Minchan Kim wrote:
> [...]
> > > > Also, your patch makes retry loop greater than MAX_RECLAIM_RETRIES
> > > > if unreserve_highatomic_pageblock returns true. Theoretically,
> > > > it would make live lock. You might argue it's *really really* rare
> > > > but I don't want to add such subtle thing.
> > > > Maybe, we could drain when no_progress_loops == MAX_RECLAIM_RETRIES.
> > > 
> > > What would be the scenario when we would really livelock here? How can
> > > we have unreserve_highatomic_pageblock returning true for ever?
> > 
> > Other context freeing highorder page/reallocating repeatedly while
> > a process stucked direct reclaim is looping with should_reclaim_retry.
> 
> If we unreserve those pages then we should converge to OOM. Btw. this
> can happen even without highmem reserves. Heavy short lived allocations
> might keep us looping at the lowest priority. They are just too unlikely
> to care about.

Indeed.
> 
> > > > > aggressive to me. If we just do one at the time we have a chance to
> > > > > keep some reserves if the OOM situation is really ephemeral.
> > > > > 
> > > > > Does this patch work in your usecase?
> > > > 
> > > > I didn't test but I guess it works but it has problems I mentioned
> > > > above. 
> > > 
> > > Please do not make this too over complicated and be practical. I do not
> > > really want to dismiss your usecase but I am really not convinced that
> > > such a "perfectly fit into all memory" situations are sustainable and
> > > justify to make the whole code more complex. I agree that we can at
> > > least try to do something to release those reserves but let's do it
> > > as simple as possible.
> > 
> > If you think it's too complicated, how about this?
> 
> Definitely better than the original patch. Little bit too aggressive
> because we could really go with one block at the time. But this is a
> minor thing and easily fixable...
> 
> > @@ -2154,12 +2156,24 @@ static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
> >  			 * may increase.
> >  			 */
> >  			set_pageblock_migratetype(page, ac->migratetype);
> > -			move_freepages_block(zone, page, ac->migratetype);
> > -			spin_unlock_irqrestore(&zone->lock, flags);
> > -			return;
> > +			ret = move_freepages_block(zone, page,
> > +						ac->migratetype);
> > +			/*
> > +			 * By race with page freeing functions, !highatomic
> > +			 * pageblocks can have free pages in highatomic free
> > +			 * list so if drain is true, try to unreserve every
> > +			 * free pages in highatomic free list without bailing
> > +			 * out.
> > +			 */
> > +			if (!drain) {
> 
> 			if (ret)
> > +				spin_unlock_irqrestore(&zone->lock, flags);
> > +				return ret;
> > +			}
> 
> arguably this would work better also for !drain case which currently
> tries to unreserve but in case of the race it would do nothing.

I thought it but I was afraid if you say again it's over complicated.
I will do it with your SOB in next spin.

Thanks, Michal.

> 
> >  		}
> >  		spin_unlock_irqrestore(&zone->lock, flags);
> >  	}
> > +
> > +	return ret;
> >  }
> >  
> >  /* Remove an element from the buddy allocator from the fallback list */
> -- 
> Michal Hocko
> SUSE Labs
> 
> --
> To unsubscribe, send a message with 'unsubscribe linux-mm' in
> the body to majordomo@kvack.org.  For more info on Linux MM,
> see: http://www.linux-mm.org/ .
> Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 3/4] mm: unreserve highatomic free pages fully before OOM
  2016-10-11  7:37                   ` Minchan Kim
@ 2016-10-11  8:01                     ` Michal Hocko
  -1 siblings, 0 replies; 56+ messages in thread
From: Michal Hocko @ 2016-10-11  8:01 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, Mel Gorman, Vlastimil Babka, Joonsoo Kim,
	linux-kernel, linux-mm, Sangseok Lee

On Tue 11-10-16 16:37:16, Minchan Kim wrote:
> On Tue, Oct 11, 2016 at 09:26:06AM +0200, Michal Hocko wrote:
> > On Tue 11-10-16 16:09:45, Minchan Kim wrote:
[...]
> > > @@ -2154,12 +2156,24 @@ static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
> > >  			 * may increase.
> > >  			 */
> > >  			set_pageblock_migratetype(page, ac->migratetype);
> > > -			move_freepages_block(zone, page, ac->migratetype);
> > > -			spin_unlock_irqrestore(&zone->lock, flags);
> > > -			return;
> > > +			ret = move_freepages_block(zone, page,
> > > +						ac->migratetype);
> > > +			/*
> > > +			 * By race with page freeing functions, !highatomic
> > > +			 * pageblocks can have free pages in highatomic free
> > > +			 * list so if drain is true, try to unreserve every
> > > +			 * free pages in highatomic free list without bailing
> > > +			 * out.
> > > +			 */
> > > +			if (!drain) {
> > 
> > 			if (ret)
> > > +				spin_unlock_irqrestore(&zone->lock, flags);
> > > +				return ret;
> > > +			}
> > 
> > arguably this would work better also for !drain case which currently
> > tries to unreserve but in case of the race it would do nothing.
> 
> I thought it but I was afraid if you say again it's over complicated.

Well, maybe there is even better/easier solution. Anyway, if
I were you I would just split it into two patches. The first
to unreserve from shoudl_reclaim_retry and the later to make
unreserve_highatomic_pageblock more reliable.

> I will do it with your SOB in next spin.

ok, thanks!
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 3/4] mm: unreserve highatomic free pages fully before OOM
@ 2016-10-11  8:01                     ` Michal Hocko
  0 siblings, 0 replies; 56+ messages in thread
From: Michal Hocko @ 2016-10-11  8:01 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, Mel Gorman, Vlastimil Babka, Joonsoo Kim,
	linux-kernel, linux-mm, Sangseok Lee

On Tue 11-10-16 16:37:16, Minchan Kim wrote:
> On Tue, Oct 11, 2016 at 09:26:06AM +0200, Michal Hocko wrote:
> > On Tue 11-10-16 16:09:45, Minchan Kim wrote:
[...]
> > > @@ -2154,12 +2156,24 @@ static void unreserve_highatomic_pageblock(const struct alloc_context *ac)
> > >  			 * may increase.
> > >  			 */
> > >  			set_pageblock_migratetype(page, ac->migratetype);
> > > -			move_freepages_block(zone, page, ac->migratetype);
> > > -			spin_unlock_irqrestore(&zone->lock, flags);
> > > -			return;
> > > +			ret = move_freepages_block(zone, page,
> > > +						ac->migratetype);
> > > +			/*
> > > +			 * By race with page freeing functions, !highatomic
> > > +			 * pageblocks can have free pages in highatomic free
> > > +			 * list so if drain is true, try to unreserve every
> > > +			 * free pages in highatomic free list without bailing
> > > +			 * out.
> > > +			 */
> > > +			if (!drain) {
> > 
> > 			if (ret)
> > > +				spin_unlock_irqrestore(&zone->lock, flags);
> > > +				return ret;
> > > +			}
> > 
> > arguably this would work better also for !drain case which currently
> > tries to unreserve but in case of the race it would do nothing.
> 
> I thought it but I was afraid if you say again it's over complicated.

Well, maybe there is even better/easier solution. Anyway, if
I were you I would just split it into two patches. The first
to unreserve from shoudl_reclaim_retry and the later to make
unreserve_highatomic_pageblock more reliable.

> I will do it with your SOB in next spin.

ok, thanks!
-- 
Michal Hocko
SUSE Labs

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 1/4] mm: adjust reserved highatomic count
  2016-10-11  4:19           ` Minchan Kim
@ 2016-10-11  9:40             ` Vlastimil Babka
  -1 siblings, 0 replies; 56+ messages in thread
From: Vlastimil Babka @ 2016-10-11  9:40 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, Mel Gorman, Joonsoo Kim, linux-kernel, linux-mm,
	Sangseok Lee

On 10/11/2016 06:19 AM, Minchan Kim wrote:
> Hi Vlasimil,
>
> On Mon, Oct 10, 2016 at 08:57:40AM +0200, Vlastimil Babka wrote:
>> On 10/07/2016 04:29 PM, Minchan Kim wrote:
>> >>>In that case, we should adjust nr_reserved_highatomic.
>> >>>Otherwise, VM cannot reserve highorderatomic pageblocks any more
>> >>>although it doesn't reach 1% limit. It means highorder atomic
>> >>>allocation failure would be higher.
>> >>>
>> >>>So, this patch decreases the account as well as migratetype
>> >>>if it was MIGRATE_HIGHATOMIC.
>> >>>
>> >>>Signed-off-by: Minchan Kim <minchan@kernel.org>
>> >>
>> >>Hm wouldn't it be simpler just to prevent the pageblock's migratetype to be
>> >>changed if it's highatomic? Possibly also not do move_freepages_block() in
>> >
>> >It could be. Actually, I did it with modifying can_steal_fallback which returns
>> >false it found the pageblock is highorderatomic but changed to this way again
>> >because I don't have any justification to prevent changing pageblock.
>> >If you give concrete justification so others isn't against on it, I am happy to
>> >do what you suggested.
>>
>> Well, MIGRATE_HIGHATOMIC is not listed in the fallbacks array at all, so we
>> are not supposed to steal from it in the first place. Stealing will only
>> happen due to races, which would be too costly to close, so we allow them
>> and expect to be rare. But we shouldn't allow them to break the accounting.
>>
>
> Fair enough.
> How about this?

Look sgood.

> From 4a0b6a74ebf1af7f90720b0028da49e2e2a2b679 Mon Sep 17 00:00:00 2001
> From: Minchan Kim <minchan@kernel.org>
> Date: Thu, 6 Oct 2016 13:38:35 +0900
> Subject: [PATCH] mm: don't steal highatomic pageblock
>
> In page freeing path, migratetype is racy so that a highorderatomic
> page could free into non-highorderatomic free list. If that page
> is allocated, VM can change the pageblock from higorderatomic to
> something. In that case, highatomic pageblock accounting is broken
> so it doesn't work(e.g., VM cannot reserve highorderatomic pageblocks
> any more although it doesn't reach 1% limit).
>
> So, this patch prohibits the changing from highatomic to other type.
> It's no problem because MIGRATE_HIGHATOMIC is not listed in fallback
> array so stealing will only happen due to unexpected races which is
> really rare. Also, such prohibiting keeps highatomic pageblock more
> longer so it would be better for highorderatomic page allocation.
>
> Signed-off-by: Minchan Kim <minchan@kernel.org>

Acked-by: Vlastimil Babka <vbabka@suse.cz>

> ---
>  mm/page_alloc.c | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 55ad0229ebf3..79853b258211 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2154,7 +2154,8 @@ __rmqueue_fallback(struct zone *zone, unsigned int order, int start_migratetype)
>
>  		page = list_first_entry(&area->free_list[fallback_mt],
>  						struct page, lru);
> -		if (can_steal)
> +		if (can_steal &&
> +			get_pageblock_migratetype(page) != MIGRATE_HIGHATOMIC)
>  			steal_suitable_fallback(zone, page, start_migratetype);
>
>  		/* Remove the page from the freelists */
> @@ -2555,7 +2556,8 @@ int __isolate_free_page(struct page *page, unsigned int order)
>  		struct page *endpage = page + (1 << order) - 1;
>  		for (; page < endpage; page += pageblock_nr_pages) {
>  			int mt = get_pageblock_migratetype(page);
> -			if (!is_migrate_isolate(mt) && !is_migrate_cma(mt))
> +			if (!is_migrate_isolate(mt) && !is_migrate_cma(mt)
> +				&& mt != MIGRATE_HIGHATOMIC)
>  				set_pageblock_migratetype(page,
>  							  MIGRATE_MOVABLE);
>  		}
>

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 1/4] mm: adjust reserved highatomic count
@ 2016-10-11  9:40             ` Vlastimil Babka
  0 siblings, 0 replies; 56+ messages in thread
From: Vlastimil Babka @ 2016-10-11  9:40 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, Mel Gorman, Joonsoo Kim, linux-kernel, linux-mm,
	Sangseok Lee

On 10/11/2016 06:19 AM, Minchan Kim wrote:
> Hi Vlasimil,
>
> On Mon, Oct 10, 2016 at 08:57:40AM +0200, Vlastimil Babka wrote:
>> On 10/07/2016 04:29 PM, Minchan Kim wrote:
>> >>>In that case, we should adjust nr_reserved_highatomic.
>> >>>Otherwise, VM cannot reserve highorderatomic pageblocks any more
>> >>>although it doesn't reach 1% limit. It means highorder atomic
>> >>>allocation failure would be higher.
>> >>>
>> >>>So, this patch decreases the account as well as migratetype
>> >>>if it was MIGRATE_HIGHATOMIC.
>> >>>
>> >>>Signed-off-by: Minchan Kim <minchan@kernel.org>
>> >>
>> >>Hm wouldn't it be simpler just to prevent the pageblock's migratetype to be
>> >>changed if it's highatomic? Possibly also not do move_freepages_block() in
>> >
>> >It could be. Actually, I did it with modifying can_steal_fallback which returns
>> >false it found the pageblock is highorderatomic but changed to this way again
>> >because I don't have any justification to prevent changing pageblock.
>> >If you give concrete justification so others isn't against on it, I am happy to
>> >do what you suggested.
>>
>> Well, MIGRATE_HIGHATOMIC is not listed in the fallbacks array at all, so we
>> are not supposed to steal from it in the first place. Stealing will only
>> happen due to races, which would be too costly to close, so we allow them
>> and expect to be rare. But we shouldn't allow them to break the accounting.
>>
>
> Fair enough.
> How about this?

Look sgood.

> From 4a0b6a74ebf1af7f90720b0028da49e2e2a2b679 Mon Sep 17 00:00:00 2001
> From: Minchan Kim <minchan@kernel.org>
> Date: Thu, 6 Oct 2016 13:38:35 +0900
> Subject: [PATCH] mm: don't steal highatomic pageblock
>
> In page freeing path, migratetype is racy so that a highorderatomic
> page could free into non-highorderatomic free list. If that page
> is allocated, VM can change the pageblock from higorderatomic to
> something. In that case, highatomic pageblock accounting is broken
> so it doesn't work(e.g., VM cannot reserve highorderatomic pageblocks
> any more although it doesn't reach 1% limit).
>
> So, this patch prohibits the changing from highatomic to other type.
> It's no problem because MIGRATE_HIGHATOMIC is not listed in fallback
> array so stealing will only happen due to unexpected races which is
> really rare. Also, such prohibiting keeps highatomic pageblock more
> longer so it would be better for highorderatomic page allocation.
>
> Signed-off-by: Minchan Kim <minchan@kernel.org>

Acked-by: Vlastimil Babka <vbabka@suse.cz>

> ---
>  mm/page_alloc.c | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 55ad0229ebf3..79853b258211 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2154,7 +2154,8 @@ __rmqueue_fallback(struct zone *zone, unsigned int order, int start_migratetype)
>
>  		page = list_first_entry(&area->free_list[fallback_mt],
>  						struct page, lru);
> -		if (can_steal)
> +		if (can_steal &&
> +			get_pageblock_migratetype(page) != MIGRATE_HIGHATOMIC)
>  			steal_suitable_fallback(zone, page, start_migratetype);
>
>  		/* Remove the page from the freelists */
> @@ -2555,7 +2556,8 @@ int __isolate_free_page(struct page *page, unsigned int order)
>  		struct page *endpage = page + (1 << order) - 1;
>  		for (; page < endpage; page += pageblock_nr_pages) {
>  			int mt = get_pageblock_migratetype(page);
> -			if (!is_migrate_isolate(mt) && !is_migrate_cma(mt))
> +			if (!is_migrate_isolate(mt) && !is_migrate_cma(mt)
> +				&& mt != MIGRATE_HIGHATOMIC)
>  				set_pageblock_migratetype(page,
>  							  MIGRATE_MOVABLE);
>  		}
>

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 1/4] mm: adjust reserved highatomic count
  2016-10-11  4:19           ` Minchan Kim
@ 2016-10-12  5:36             ` Mel Gorman
  -1 siblings, 0 replies; 56+ messages in thread
From: Mel Gorman @ 2016-10-12  5:36 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Vlastimil Babka, Andrew Morton, Joonsoo Kim, linux-kernel,
	linux-mm, Sangseok Lee

On Tue, Oct 11, 2016 at 01:19:16PM +0900, Minchan Kim wrote:
> From 4a0b6a74ebf1af7f90720b0028da49e2e2a2b679 Mon Sep 17 00:00:00 2001
> From: Minchan Kim <minchan@kernel.org>
> Date: Thu, 6 Oct 2016 13:38:35 +0900
> Subject: [PATCH] mm: don't steal highatomic pageblock
> 
> In page freeing path, migratetype is racy so that a highorderatomic
> page could free into non-highorderatomic free list. If that page
> is allocated, VM can change the pageblock from higorderatomic to
> something. In that case, highatomic pageblock accounting is broken
> so it doesn't work(e.g., VM cannot reserve highorderatomic pageblocks
> any more although it doesn't reach 1% limit).
> 
> So, this patch prohibits the changing from highatomic to other type.
> It's no problem because MIGRATE_HIGHATOMIC is not listed in fallback
> array so stealing will only happen due to unexpected races which is
> really rare. Also, such prohibiting keeps highatomic pageblock more
> longer so it would be better for highorderatomic page allocation.
> 
> Signed-off-by: Minchan Kim <minchan@kernel.org>
> ---
>  mm/page_alloc.c | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 55ad0229ebf3..79853b258211 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2154,7 +2154,8 @@ __rmqueue_fallback(struct zone *zone, unsigned int order, int start_migratetype)
>  
>  		page = list_first_entry(&area->free_list[fallback_mt],
>  						struct page, lru);
> -		if (can_steal)
> +		if (can_steal &&
> +			get_pageblock_migratetype(page) != MIGRATE_HIGHATOMIC)
>  			steal_suitable_fallback(zone, page, start_migratetype);
>  
>  		/* Remove the page from the freelists */
> @@ -2555,7 +2556,8 @@ int __isolate_free_page(struct page *page, unsigned int order)
>  		struct page *endpage = page + (1 << order) - 1;
>  		for (; page < endpage; page += pageblock_nr_pages) {
>  			int mt = get_pageblock_migratetype(page);
> -			if (!is_migrate_isolate(mt) && !is_migrate_cma(mt))
> +			if (!is_migrate_isolate(mt) && !is_migrate_cma(mt)
> +				&& mt != MIGRATE_HIGHATOMIC)
>  				set_pageblock_migratetype(page,
>  							  MIGRATE_MOVABLE);
>  		}

Acked-by: Mel Gorman <mgorman@techsingularity.net>

-- 
Mel Gorman
SUSE Labs

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 1/4] mm: adjust reserved highatomic count
@ 2016-10-12  5:36             ` Mel Gorman
  0 siblings, 0 replies; 56+ messages in thread
From: Mel Gorman @ 2016-10-12  5:36 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Vlastimil Babka, Andrew Morton, Joonsoo Kim, linux-kernel,
	linux-mm, Sangseok Lee

On Tue, Oct 11, 2016 at 01:19:16PM +0900, Minchan Kim wrote:
> From 4a0b6a74ebf1af7f90720b0028da49e2e2a2b679 Mon Sep 17 00:00:00 2001
> From: Minchan Kim <minchan@kernel.org>
> Date: Thu, 6 Oct 2016 13:38:35 +0900
> Subject: [PATCH] mm: don't steal highatomic pageblock
> 
> In page freeing path, migratetype is racy so that a highorderatomic
> page could free into non-highorderatomic free list. If that page
> is allocated, VM can change the pageblock from higorderatomic to
> something. In that case, highatomic pageblock accounting is broken
> so it doesn't work(e.g., VM cannot reserve highorderatomic pageblocks
> any more although it doesn't reach 1% limit).
> 
> So, this patch prohibits the changing from highatomic to other type.
> It's no problem because MIGRATE_HIGHATOMIC is not listed in fallback
> array so stealing will only happen due to unexpected races which is
> really rare. Also, such prohibiting keeps highatomic pageblock more
> longer so it would be better for highorderatomic page allocation.
> 
> Signed-off-by: Minchan Kim <minchan@kernel.org>
> ---
>  mm/page_alloc.c | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 55ad0229ebf3..79853b258211 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2154,7 +2154,8 @@ __rmqueue_fallback(struct zone *zone, unsigned int order, int start_migratetype)
>  
>  		page = list_first_entry(&area->free_list[fallback_mt],
>  						struct page, lru);
> -		if (can_steal)
> +		if (can_steal &&
> +			get_pageblock_migratetype(page) != MIGRATE_HIGHATOMIC)
>  			steal_suitable_fallback(zone, page, start_migratetype);
>  
>  		/* Remove the page from the freelists */
> @@ -2555,7 +2556,8 @@ int __isolate_free_page(struct page *page, unsigned int order)
>  		struct page *endpage = page + (1 << order) - 1;
>  		for (; page < endpage; page += pageblock_nr_pages) {
>  			int mt = get_pageblock_migratetype(page);
> -			if (!is_migrate_isolate(mt) && !is_migrate_cma(mt))
> +			if (!is_migrate_isolate(mt) && !is_migrate_cma(mt)
> +				&& mt != MIGRATE_HIGHATOMIC)
>  				set_pageblock_migratetype(page,
>  							  MIGRATE_MOVABLE);
>  		}

Acked-by: Mel Gorman <mgorman@techsingularity.net>

-- 
Mel Gorman
SUSE Labs

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 2/4] mm: prevent double decrease of nr_reserved_highatomic
  2016-10-07  5:45   ` Minchan Kim
@ 2016-10-12  5:36     ` Mel Gorman
  -1 siblings, 0 replies; 56+ messages in thread
From: Mel Gorman @ 2016-10-12  5:36 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, Vlastimil Babka, Joonsoo Kim, linux-kernel,
	linux-mm, Sangseok Lee

On Fri, Oct 07, 2016 at 02:45:34PM +0900, Minchan Kim wrote:
> There is race between page freeing and unreserved highatomic.
> 
>  CPU 0				    CPU 1
> 
>     free_hot_cold_page
>       mt = get_pfnblock_migratetype
>       set_pcppage_migratetype(page, mt)
>     				    unreserve_highatomic_pageblock
>     				    spin_lock_irqsave(&zone->lock)
>     				    move_freepages_block
>     				    set_pageblock_migratetype(page)
>     				    spin_unlock_irqrestore(&zone->lock)
>       free_pcppages_bulk
>         __free_one_page(mt) <- mt is stale
> 
> By above race, a page on CPU 0 could go non-highorderatomic free list
> since the pageblock's type is changed. By that, unreserve logic of
> highorderatomic can decrease reserved count on a same pageblock
> several times and then it will make mismatch between
> nr_reserved_highatomic and the number of reserved pageblock.
> 
> So, this patch verifies whether the pageblock is highatomic or not
> and decrease the count only if the pageblock is highatomic.
> 
> Signed-off-by: Minchan Kim <minchan@kernel.org>

Acked-by: Mel Gorman <mgorman@techsingularity.net>

-- 
Mel Gorman
SUSE Labs

^ permalink raw reply	[flat|nested] 56+ messages in thread

* Re: [PATCH 2/4] mm: prevent double decrease of nr_reserved_highatomic
@ 2016-10-12  5:36     ` Mel Gorman
  0 siblings, 0 replies; 56+ messages in thread
From: Mel Gorman @ 2016-10-12  5:36 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, Vlastimil Babka, Joonsoo Kim, linux-kernel,
	linux-mm, Sangseok Lee

On Fri, Oct 07, 2016 at 02:45:34PM +0900, Minchan Kim wrote:
> There is race between page freeing and unreserved highatomic.
> 
>  CPU 0				    CPU 1
> 
>     free_hot_cold_page
>       mt = get_pfnblock_migratetype
>       set_pcppage_migratetype(page, mt)
>     				    unreserve_highatomic_pageblock
>     				    spin_lock_irqsave(&zone->lock)
>     				    move_freepages_block
>     				    set_pageblock_migratetype(page)
>     				    spin_unlock_irqrestore(&zone->lock)
>       free_pcppages_bulk
>         __free_one_page(mt) <- mt is stale
> 
> By above race, a page on CPU 0 could go non-highorderatomic free list
> since the pageblock's type is changed. By that, unreserve logic of
> highorderatomic can decrease reserved count on a same pageblock
> several times and then it will make mismatch between
> nr_reserved_highatomic and the number of reserved pageblock.
> 
> So, this patch verifies whether the pageblock is highatomic or not
> and decrease the count only if the pageblock is highatomic.
> 
> Signed-off-by: Minchan Kim <minchan@kernel.org>

Acked-by: Mel Gorman <mgorman@techsingularity.net>

-- 
Mel Gorman
SUSE Labs

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 56+ messages in thread

end of thread, other threads:[~2016-10-12  6:54 UTC | newest]

Thread overview: 56+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-10-07  5:45 [PATCH 0/4] use up highorder free pages before OOM Minchan Kim
2016-10-07  5:45 ` Minchan Kim
2016-10-07  5:45 ` [PATCH 1/4] mm: adjust reserved highatomic count Minchan Kim
2016-10-07  5:45   ` Minchan Kim
2016-10-07 12:30   ` Vlastimil Babka
2016-10-07 12:30     ` Vlastimil Babka
2016-10-07 14:29     ` Minchan Kim
2016-10-07 14:29       ` Minchan Kim
2016-10-10  6:57       ` Vlastimil Babka
2016-10-10  6:57         ` Vlastimil Babka
2016-10-11  4:19         ` Minchan Kim
2016-10-11  4:19           ` Minchan Kim
2016-10-11  9:40           ` Vlastimil Babka
2016-10-11  9:40             ` Vlastimil Babka
2016-10-12  5:36           ` Mel Gorman
2016-10-12  5:36             ` Mel Gorman
2016-10-07  5:45 ` [PATCH 2/4] mm: prevent double decrease of nr_reserved_highatomic Minchan Kim
2016-10-07  5:45   ` Minchan Kim
2016-10-07 12:44   ` Vlastimil Babka
2016-10-07 12:44     ` Vlastimil Babka
2016-10-07 14:30     ` Minchan Kim
2016-10-07 14:30       ` Minchan Kim
2016-10-12  5:36   ` Mel Gorman
2016-10-12  5:36     ` Mel Gorman
2016-10-07  5:45 ` [PATCH 3/4] mm: unreserve highatomic free pages fully before OOM Minchan Kim
2016-10-07  5:45   ` Minchan Kim
2016-10-07  9:09   ` Michal Hocko
2016-10-07  9:09     ` Michal Hocko
2016-10-07 14:43     ` Minchan Kim
2016-10-07 14:43       ` Minchan Kim
2016-10-10  7:41       ` Michal Hocko
2016-10-10  7:41         ` Michal Hocko
2016-10-11  5:01         ` Minchan Kim
2016-10-11  5:01           ` Minchan Kim
2016-10-11  6:50           ` Michal Hocko
2016-10-11  6:50             ` Michal Hocko
2016-10-11  7:09             ` Minchan Kim
2016-10-11  7:09               ` Minchan Kim
2016-10-11  7:26               ` Michal Hocko
2016-10-11  7:26                 ` Michal Hocko
2016-10-11  7:37                 ` Minchan Kim
2016-10-11  7:37                   ` Minchan Kim
2016-10-11  8:01                   ` Michal Hocko
2016-10-11  8:01                     ` Michal Hocko
2016-10-07  5:45 ` [PATCH 4/4] mm: skip to reserve pageblock crossed zone boundary for HIGHATOMIC Minchan Kim
2016-10-07  5:45   ` Minchan Kim
2016-10-07  9:16 ` [PATCH 0/4] use up highorder free pages before OOM Michal Hocko
2016-10-07  9:16   ` Michal Hocko
2016-10-07 15:04   ` Minchan Kim
2016-10-07 15:04     ` Minchan Kim
2016-10-10  7:47     ` Michal Hocko
2016-10-10  7:47       ` Michal Hocko
2016-10-11  5:06       ` Minchan Kim
2016-10-11  5:06         ` Minchan Kim
2016-10-11  6:53         ` Michal Hocko
2016-10-11  6:53           ` Michal Hocko

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.