* [RFC 0/3] OOM detection rework v2
@ 2015-11-18 13:03 Michal Hocko
  2015-11-18 13:03 ` [RFC 1/3] mm, oom: refactor oom detection Michal Hocko
                   ` (3 more replies)
  0 siblings, 4 replies; 22+ messages in thread
From: Michal Hocko @ 2015-11-18 13:03 UTC (permalink / raw)
  To: linux-mm
  Cc: Andrew Morton, Linus Torvalds, Mel Gorman, Johannes Weiner,
	David Rientjes, Tetsuo Handa, Hillf Danton, KAMEZAWA Hiroyuki

Hi,
this is the second version of the patchset. The previous version was
posted here [1]. Changes since the last time are not huge: the backoff
calculation was de-obfuscated by using DIV_ROUND_UP, and one theoretical
bug for costly __GFP_NOFAIL requests was fixed.

As pointed out by Linus [2][3], relying on zone_reclaimable as a way to
communicate reclaim progress is rather dubious. I tend to agree; not
only is it really obscure, it is not hard to imagine cases where a
single page freed in the loop keeps all the reclaimers looping without
making any progress, because their gfp_mask wouldn't allow them to get
that page anyway (e.g. a single GFP_ATOMIC alloc and free loop). This is
rare enough that it doesn't happen in practice, but the current logic is
obscure, hard to follow and non-deterministic.

This is an attempt to make the OOM detection more deterministic and
easier to follow, because each reclaimer basically tracks its own
progress, which is implemented at the page allocator layer rather than
spread out between the allocator and the reclaim code. More on the
implementation is described in the first patch.

I have tested several different scenarios, but it should be clear that
testing the OOM killer in a representative way is quite hard. There is
usually only a tiny gap between almost OOM and a full blown OOM, and
that gap is often time sensitive. Anyway, I have tested the following 3
scenarios and I would appreciate suggestions for more.

Testing environment: a virtual machine with 2G of RAM and 2 CPUs without
any swap to make the OOM behaviour more deterministic.

1) 2 writers (each doing dd with 4M blocks to a 1G xfs partition,
   removing the files and starting over again) running in parallel for
   10s to build up a lot of dirty pages, then 100 parallel mem_eaters
   (anon private populated mmap which waits until it gets a signal; a
   minimal sketch is shown below) with 80M each.

   This causes an OOM flood of course, and I have compared the patched
   and unpatched kernels. The test is considered finished once no more
   OOM conditions are detected. This should tell us whether there are
   any excessive or premature kills:
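
For reference, a minimal userspace sketch of such a mem_eater (the exact
program used for the test is not part of this posting, so the details
below are illustrative):

/* mem_eater sketch: populate 80M of private anonymous memory, then wait */
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	size_t size = 80UL << 20;	/* 80M per eater, as in this scenario */
	void *mem;

	/* anon private populated mmap - pages are faulted in up front */
	mem = mmap(NULL, size, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS | MAP_POPULATE, -1, 0);
	if (mem == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	pause();	/* hold the memory until a signal arrives */
	munmap(mem, size);
	return 0;
}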

* base kernel
$ grep "Killed process" base-oom-run.log | tail -n1
[  836.589319] Killed process 3035 (mem_eater) total-vm:85852kB, anon-rss:81996kB, file-rss:344kB
$ grep "invoked oom-killer" base-oom-run.log | wc -l
78
$ grep "DMA32.*all_unreclaimable? no" base-oom-run.log | wc -l
0

* patched kernel
$ grep "Killed process" patched-oom-run.log | tail -n1
[  843.281009] Killed process 2998 (mem_eater) total-vm:85852kB, anon-rss:82000kB, file-rss:4kB
$ grep "invoked oom-killer" patched-oom-run.log | wc -l
77
$ grep "DMA32.*all_unreclaimable? no" patched-oom-run.log | wc -l
0

So both have finished in a comparable time and killed a very similar number
of processes, and there doesn't seem to be any case where the patched kernel
would have considered the DMA32 zone reclaimable.

2) 2 writers again, run for 10s, and then 10 mem_eaters sized to consume
   as much memory as possible without triggering the OOM killer. This
   required a lot of tuning, but I've considered 3 consecutive runs
   without OOM a success.

* base kernel
size=$(awk '/MemFree/{printf "%dK", ($2/10)-(14*1024)}' /proc/meminfo)

* patched kernel
size=$(awk '/MemFree/{printf "%dK", ($2/10)-(7500)}' /proc/meminfo)
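
In other words, each of the 10 mem_eaters is sized to a tenth of MemFree
minus a per-eater safety margin: the base kernel needed roughly a 14M
margin to survive 3 consecutive runs without OOM, while the patched
kernel got by with about 7.5M.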

So it seems that the patched kernel handled the low memory conditions better
and fired the OOM killer later.

3) Costly high-order allocations with a limited amount of memory.
   Start 10 memeaters in parallel, each with
   size=$(awk '/MemTotal/{printf "%d\n", $2/10}' /proc/meminfo)
   This will trigger the OOM killer, which kills one of them and thereby
   frees up roughly 200M, and then try to use all the remaining space for
   hugetlb pages. See how many hugetlb pages can be allocated, then kill
   everything, wait 2s and try again (a rough sketch of the hugetlb step
   is shown below).
   This tests whether we do not fail __GFP_REPEAT costly allocations too
   early now.
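
A rough sketch of the hugetlb step follows (hypothetical - the actual
test script is not part of this posting; it assumes the pool is resized
via /proc/sys/vm/nr_hugepages and that the "Trying to allocate N" lines
in the logs below come from here):

/* hugetlb step sketch: request N huge pages and report how many we got */
#include <stdio.h>

int main(void)
{
	unsigned long want = 74;	/* e.g. "Trying to allocate 74" */
	unsigned long got = 0;
	FILE *f;

	printf("Trying to allocate %lu\n", want);

	/* growing the pool triggers costly (order-9) __GFP_REPEAT allocations */
	f = fopen("/proc/sys/vm/nr_hugepages", "w");
	if (!f)
		return 1;
	fprintf(f, "%lu\n", want);
	fclose(f);

	/* read back how many huge pages the kernel actually allocated */
	f = fopen("/proc/sys/vm/nr_hugepages", "r");
	if (!f)
		return 1;
	if (fscanf(f, "%lu", &got) == 1)
		printf("%lu\n", got);
	fclose(f);

	return 0;
}
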
* base kernel
$ sort base-hugepages.log | uniq -c
      1 66
     19 67
     20 Trying to allocate 74

* patched kernel
$ sort patched-hugepages.log | uniq -c
      1 66
     19 67
     20 Trying to allocate 74

This doesn't look too bad either, but this particular test is quite timing
sensitive.

The above results do seem optimistic but more loads should obviously be
tested. I would really appreciate feedback on the approach I have
chosen before I go into more tuning. Is this a viable way to go?

[1] http://lkml.kernel.org/r/1446131835-3263-1-git-send-email-mhocko@kernel.org
[2] http://lkml.kernel.org/r/CA+55aFwapaED7JV6zm-NVkP-jKie+eQ1vDXWrKD=SkbshZSgmw@mail.gmail.com
[3] http://lkml.kernel.org/r/CA+55aFxwg=vS2nrXsQhAUzPQDGb8aQpZi0M7UUh21ftBo-z46Q@mail.gmail.com


* [RFC 1/3] mm, oom: refactor oom detection
  2015-11-18 13:03 [RFC 0/3] OOM detection rework v2 Michal Hocko
@ 2015-11-18 13:03 ` Michal Hocko
  2015-11-19 23:01   ` David Rientjes
  2015-11-18 13:03 ` [RFC 2/3] mm: throttle on IO only when there are too many dirty and writeback pages Michal Hocko
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 22+ messages in thread
From: Michal Hocko @ 2015-11-18 13:03 UTC (permalink / raw)
  To: linux-mm
  Cc: Andrew Morton, Linus Torvalds, Mel Gorman, Johannes Weiner,
	David Rientjes, Tetsuo Handa, Hillf Danton, KAMEZAWA Hiroyuki,
	Michal Hocko

From: Michal Hocko <mhocko@suse.com>

__alloc_pages_slowpath has traditionally relied on direct reclaim
and did_some_progress as an indicator that it makes sense to retry the
allocation rather than declaring OOM. shrink_zones had to rely on
zone_reclaimable if shrink_zone didn't make any progress, to prevent
a premature OOM killer invocation - the LRU might be full of dirty
or writeback pages which direct reclaim cannot clean up.

zone_reclaimable allows rescanning the reclaimable lists several
times and restarting if a page is freed. This is really subtle behavior
and it might lead to a livelock when a single freed page keeps the
allocator looping while the current task is not able to allocate that
single page. The OOM killer would be more appropriate than looping
without any progress for an unbounded amount of time.

This patch changes the OOM detection logic and pulls it out of shrink_zone,
which sits too low in the stack to be appropriate for any high level
decisions such as OOM, which is a per-zonelist property. It is
__alloc_pages_slowpath which knows how many attempts have been made and
what the progress has been so far, therefore it is the more appropriate
place to implement this logic.

The new heuristic tries to be more deterministic and easier to follow.
It builds on the assumption that retrying makes sense only if the
currently reclaimable memory + free pages would allow the current
allocation request to succeed (as per __zone_watermark_ok) for at least
one zone in the usable zonelist.

This alone wouldn't be sufficient, though, because writeback might
get stuck and reclaimable pages might be pinned for a really long time
or even depend on the current allocation context. Therefore a feedback
mechanism is implemented which reduces the reclaim target after each
reclaim round without any progress. This means that we should eventually
converge to only NR_FREE_PAGES as the target, fail the watermark check
and proceed to OOM. The backoff is simple and linear, dropping 1/16 of
the reclaimable pages for each round without any progress. We are
optimistic and reset the counter after a successful reclaim round.
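
To make the convergence concrete, here is a small userspace sketch of
the target computation (the reclaimable/free numbers are made up for the
example; the real code operates on per-zone vmstat counters):

/* Sketch of how the retry target converges to NR_FREE_PAGES */
#include <stdio.h>

#define MAX_STALL_BACKOFF 16
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

int main(void)
{
	unsigned long reclaimable = 32000;	/* illustrative LRU size in pages */
	unsigned long free = 1000;		/* illustrative NR_FREE_PAGES */
	int stall_backoff;

	for (stall_backoff = 0; stall_backoff <= MAX_STALL_BACKOFF; stall_backoff++) {
		unsigned long target = reclaimable;

		target -= DIV_ROUND_UP(stall_backoff * target, MAX_STALL_BACKOFF);
		target += free;

		/* after MAX_STALL_BACKOFF rounds only the free pages remain */
		printf("backoff %2d: target %lu pages\n", stall_backoff, target);
	}
	return 0;
}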

Costly high order allocations mostly preserve their semantics: those
without __GFP_REPEAT fail right away while those which have the flag set
will back off once the number of reclaimed pages reaches the equivalent
of the requested order. The only difference is that if there was no
progress during the reclaim we rely on the zone watermark check. This is
more logical than the previous 1<<order attempts, which were a result of
zone_reclaimable faking the progress.

Acked-by: Hillf Danton <hillf.zj@alibaba-inc.com>
Signed-off-by: Michal Hocko <mhocko@suse.com>
---
 include/linux/swap.h |  1 +
 mm/page_alloc.c      | 70 ++++++++++++++++++++++++++++++++++++++++++++++------
 mm/vmscan.c          | 13 ++--------
 3 files changed, 66 insertions(+), 18 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 457181844b6e..738ae2206635 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -316,6 +316,7 @@ extern void lru_cache_add_active_or_unevictable(struct page *page,
 						struct vm_area_struct *vma);
 
 /* linux/mm/vmscan.c */
+extern unsigned long zone_reclaimable_pages(struct zone *zone);
 extern unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
 					gfp_t gfp_mask, nodemask_t *mask);
 extern int __isolate_lru_page(struct page *page, isolate_mode_t mode);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 8034909faad2..020c005c5bc0 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2992,6 +2992,13 @@ static inline bool is_thp_gfp_mask(gfp_t gfp_mask)
 	return (gfp_mask & (GFP_TRANSHUGE | __GFP_KSWAPD_RECLAIM)) == GFP_TRANSHUGE;
 }
 
+/*
+ * Number of backoff steps for potentially reclaimable pages if the direct reclaim
+ * cannot make any progress. Each step will reduce 1/MAX_STALL_BACKOFF of the
+ * reclaimable memory.
+ */
+#define MAX_STALL_BACKOFF 16
+
 static inline struct page *
 __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 						struct alloc_context *ac)
@@ -3004,6 +3011,9 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	enum migrate_mode migration_mode = MIGRATE_ASYNC;
 	bool deferred_compaction = false;
 	int contended_compaction = COMPACT_CONTENDED_NONE;
+	struct zone *zone;
+	struct zoneref *z;
+	int stall_backoff = 0;
 
 	/*
 	 * In the slowpath, we sanity check order to avoid ever trying to
@@ -3155,13 +3165,57 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	if (gfp_mask & __GFP_NORETRY)
 		goto noretry;
 
-	/* Keep reclaiming pages as long as there is reasonable progress */
+	/*
+	 * Do not retry high order allocations unless they are __GFP_REPEAT
+	 * and even then do not retry endlessly unless explicitly told so
+	 */
 	pages_reclaimed += did_some_progress;
-	if ((did_some_progress && order <= PAGE_ALLOC_COSTLY_ORDER) ||
-	    ((gfp_mask & __GFP_REPEAT) && pages_reclaimed < (1 << order))) {
-		/* Wait for some write requests to complete then retry */
-		wait_iff_congested(ac->preferred_zone, BLK_RW_ASYNC, HZ/50);
-		goto retry;
+	if (order > PAGE_ALLOC_COSTLY_ORDER) {
+		if (!(gfp_mask & __GFP_NOFAIL) &&
+		   (!(gfp_mask & __GFP_REPEAT) || pages_reclaimed >= (1<<order)))
+			goto noretry;
+
+		if (did_some_progress)
+			goto retry;
+	}
+
+	/*
+	 * Be optimistic and consider all pages on reclaimable LRUs as usable
+	 * but make sure we converge to OOM if we cannot make any progress after
+	 * multiple consecutive failed attempts.
+	 */
+	if (did_some_progress)
+		stall_backoff = 0;
+	else
+		stall_backoff = min(stall_backoff+1, MAX_STALL_BACKOFF);
+
+	/*
+	 * Keep reclaiming pages while there is a chance this will lead somewhere.
+	 * If none of the target zones can satisfy our allocation request even
+	 * if all reclaimable pages are considered then we are screwed and have
+	 * to go OOM.
+	 */
+	for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx, ac->nodemask) {
+		unsigned long free = zone_page_state(zone, NR_FREE_PAGES);
+		unsigned long reclaimable;
+		unsigned long target;
+
+		reclaimable = zone_reclaimable_pages(zone) +
+			      zone_page_state(zone, NR_ISOLATED_FILE) +
+			      zone_page_state(zone, NR_ISOLATED_ANON);
+		target = reclaimable;
+		target -= DIV_ROUND_UP(stall_backoff * target, MAX_STALL_BACKOFF);
+		target += free;
+
+		/*
+		 * Would the allocation succeed if we reclaimed the whole target?
+		 */
+		if (__zone_watermark_ok(zone, order, min_wmark_pages(zone),
+				ac->high_zoneidx, alloc_flags, target)) {
+			/* Wait for some write requests to complete then retry */
+			wait_iff_congested(zone, BLK_RW_ASYNC, HZ/50);
+			goto retry;
+		}
 	}
 
 	/* Reclaim has failed us, start killing things */
@@ -3170,8 +3224,10 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 		goto got_pg;
 
 	/* Retry as long as the OOM killer is making progress */
-	if (did_some_progress)
+	if (did_some_progress) {
+		stall_backoff = 0;
 		goto retry;
+	}
 
 noretry:
 	/*
diff --git a/mm/vmscan.c b/mm/vmscan.c
index a4507ecaefbf..9060a71e5a90 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -192,7 +192,7 @@ static bool sane_reclaim(struct scan_control *sc)
 }
 #endif
 
-static unsigned long zone_reclaimable_pages(struct zone *zone)
+unsigned long zone_reclaimable_pages(struct zone *zone)
 {
 	unsigned long nr;
 
@@ -2594,10 +2594,6 @@ static bool shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
 
 		if (shrink_zone(zone, sc, zone_idx(zone) == classzone_idx))
 			reclaimable = true;
-
-		if (global_reclaim(sc) &&
-		    !reclaimable && zone_reclaimable(zone))
-			reclaimable = true;
 	}
 
 	/*
@@ -2631,7 +2627,6 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
 	int initial_priority = sc->priority;
 	unsigned long total_scanned = 0;
 	unsigned long writeback_threshold;
-	bool zones_reclaimable;
 retry:
 	delayacct_freepages_start();
 
@@ -2642,7 +2637,7 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
 		vmpressure_prio(sc->gfp_mask, sc->target_mem_cgroup,
 				sc->priority);
 		sc->nr_scanned = 0;
-		zones_reclaimable = shrink_zones(zonelist, sc);
+		shrink_zones(zonelist, sc);
 
 		total_scanned += sc->nr_scanned;
 		if (sc->nr_reclaimed >= sc->nr_to_reclaim)
@@ -2689,10 +2684,6 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
 		goto retry;
 	}
 
-	/* Any of the zones still reclaimable?  Don't OOM. */
-	if (zones_reclaimable)
-		return 1;
-
 	return 0;
 }
 
-- 
2.6.2


* [RFC 2/3] mm: throttle on IO only when there are too many dirty and writeback pages
  2015-11-18 13:03 [RFC 0/3] OOM detection rework v2 Michal Hocko
  2015-11-18 13:03 ` [RFC 1/3] mm, oom: refactor oom detection Michal Hocko
@ 2015-11-18 13:03 ` Michal Hocko
  2015-11-19 23:12   ` David Rientjes
  2015-11-18 13:04 ` [RFC 3/3] mm: use watermark checks for __GFP_REPEAT high order allocations Michal Hocko
  2015-11-18 16:21 ` [RFC 0/3] OOM detection rework v2 Linus Torvalds
  3 siblings, 1 reply; 22+ messages in thread
From: Michal Hocko @ 2015-11-18 13:03 UTC (permalink / raw)
  To: linux-mm
  Cc: Andrew Morton, Linus Torvalds, Mel Gorman, Johannes Weiner,
	David Rientjes, Tetsuo Handa, Hillf Danton, KAMEZAWA Hiroyuki,
	Michal Hocko

From: Michal Hocko <mhocko@suse.com>

wait_iff_congested has been used to throttle the allocator before it
retried another round of direct reclaim, to allow the writeback to make
some progress and prevent reclaim from looping over dirty/writeback
pages without making any progress. We used to do congestion_wait before
0e093d99763e ("writeback: do not sleep on the congestion queue if
there are no congested BDIs or if significant congestion is not being
encountered in the current zone") but that led to undesirable stalls
and sleeping for the full timeout even when the BDI wasn't congested.
Hence wait_iff_congested was used instead. But it seems that even
wait_iff_congested doesn't work as expected. We might have a small file
LRU list with all pages dirty/writeback and yet the bdi is not congested,
so this ends up being just a cond_resched and can trigger a premature
OOM.

This patch replaces the unconditional wait_iff_congested by
congestion_wait, which is executed only if we _know_ that the last round
of direct reclaim didn't make any progress and dirty+writeback pages make
up more than half of the reclaimable pages in the zone which might be
usable for our target allocation. This shouldn't reintroduce the stalls
fixed by 0e093d99763e because congestion_wait is called only when we
are getting hopeless and sleeping is a better choice than OOM with many
pages under IO.
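
As a concrete example, on a zone with ~100000 reclaimable pages the
allocator would only go to sleep here if more than ~50000 of them were
dirty or under writeback and the last reclaim round made no progress;
otherwise it just reschedules and retries.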

Signed-off-by: Michal Hocko <mhocko@suse.com>
---
 mm/page_alloc.c | 16 ++++++++++++++--
 1 file changed, 14 insertions(+), 2 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 020c005c5bc0..e6271bc19e6a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3212,8 +3212,20 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 		 */
 		if (__zone_watermark_ok(zone, order, min_wmark_pages(zone),
 				ac->high_zoneidx, alloc_flags, target)) {
-			/* Wait for some write requests to complete then retry */
-			wait_iff_congested(zone, BLK_RW_ASYNC, HZ/50);
+			unsigned long writeback = zone_page_state(zone, NR_WRITEBACK),
+				      dirty = zone_page_state(zone, NR_FILE_DIRTY);
+
+			/*
+			 * If we didn't make any progress and have a lot of
+			 * dirty + writeback pages then we should wait for
+			 * an IO to complete to slow down the reclaim and
+			 * prevent from pre mature OOM
+			 */
+			if (!did_some_progress && 2*(writeback + dirty) > reclaimable)
+				congestion_wait(BLK_RW_ASYNC, HZ/10);
+			else
+				cond_resched();
+
 			goto retry;
 		}
 	}
-- 
2.6.2


* [RFC 3/3] mm: use watermark checks for __GFP_REPEAT high order allocations
  2015-11-18 13:03 [RFC 0/3] OOM detection rework v2 Michal Hocko
  2015-11-18 13:03 ` [RFC 1/3] mm, oom: refactor oom detection Michal Hocko
  2015-11-18 13:03 ` [RFC 2/3] mm: throttle on IO only when there are too many dirty and writeback pages Michal Hocko
@ 2015-11-18 13:04 ` Michal Hocko
  2015-11-19 23:17   ` David Rientjes
  2015-11-18 16:21 ` [RFC 0/3] OOM detection rework v2 Linus Torvalds
  3 siblings, 1 reply; 22+ messages in thread
From: Michal Hocko @ 2015-11-18 13:04 UTC (permalink / raw)
  To: linux-mm
  Cc: Andrew Morton, Linus Torvalds, Mel Gorman, Johannes Weiner,
	David Rientjes, Tetsuo Handa, Hillf Danton, KAMEZAWA Hiroyuki,
	Michal Hocko

From: Michal Hocko <mhocko@suse.com>

__alloc_pages_slowpath retries costly allocations until at least
order worth of pages has been reclaimed or the watermark check for at
least one zone would succeed after reclaiming all pages when the reclaim
hasn't made any progress.

The first condition was added by a41f24ea9fd6 ("page allocator: smarter
retry of costly-order allocations") and it assumed that lumpy reclaim
could have created a page of sufficient order. Lumpy reclaim has been
removed quite some time ago, so the assumption doesn't hold anymore.
It would be more appropriate to check the compaction progress instead,
but this patch simply removes the check and relies solely on the
watermark check.
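
As an example, a hugetlb allocation is order-9, so the old check kept
retrying until 1<<9 = 512 base pages (2M worth of order-0 pages) had
been reclaimed in total, even though reclaiming that many scattered base
pages says nothing about whether a contiguous order-9 block has become
available.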

To prevent too many retries, the stall_backoff is not reset after a
reclaim which made progress, because we cannot assume it helped the
high order situation.

Signed-off-by: Michal Hocko <mhocko@suse.com>
---
 mm/page_alloc.c | 20 ++++++++------------
 1 file changed, 8 insertions(+), 12 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index e6271bc19e6a..999c8cdbe7b5 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3006,7 +3006,6 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	bool can_direct_reclaim = gfp_mask & __GFP_DIRECT_RECLAIM;
 	struct page *page = NULL;
 	int alloc_flags;
-	unsigned long pages_reclaimed = 0;
 	unsigned long did_some_progress;
 	enum migrate_mode migration_mode = MIGRATE_ASYNC;
 	bool deferred_compaction = false;
@@ -3167,24 +3166,21 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 
 	/*
 	 * Do not retry high order allocations unless they are __GFP_REPEAT
-	 * and even then do not retry endlessly unless explicitly told so
+	 * unless explicitly told so.
 	 */
-	pages_reclaimed += did_some_progress;
-	if (order > PAGE_ALLOC_COSTLY_ORDER) {
-		if (!(gfp_mask & __GFP_NOFAIL) &&
-		   (!(gfp_mask & __GFP_REPEAT) || pages_reclaimed >= (1<<order)))
-			goto noretry;
-
-		if (did_some_progress)
-			goto retry;
-	}
+	if (order > PAGE_ALLOC_COSTLY_ORDER &&
+			!(gfp_mask & (__GFP_REPEAT|__GFP_NOFAIL)))
+		goto noretry;
 
 	/*
 	 * Be optimistic and consider all pages on reclaimable LRUs as usable
 	 * but make sure we converge to OOM if we cannot make any progress after
 	 * multiple consecutive failed attempts.
+	 * Costly __GFP_REPEAT allocations might have made a progress but this
+	 * doesn't mean their order will become available due to high fragmentation
+	 * so do not reset the backoff for them
 	 */
-	if (did_some_progress)
+	if (did_some_progress && order <= PAGE_ALLOC_COSTLY_ORDER)
 		stall_backoff = 0;
 	else
 		stall_backoff = min(stall_backoff+1, MAX_STALL_BACKOFF);
-- 
2.6.2


* Re: [RFC 0/3] OOM detection rework v2
  2015-11-18 13:03 [RFC 0/3] OOM detection rework v2 Michal Hocko
                   ` (2 preceding siblings ...)
  2015-11-18 13:04 ` [RFC 3/3] mm: use watermark checks for __GFP_REPEAT high order allocations Michal Hocko
@ 2015-11-18 16:21 ` Linus Torvalds
  3 siblings, 0 replies; 22+ messages in thread
From: Linus Torvalds @ 2015-11-18 16:21 UTC (permalink / raw)
  To: Michal Hocko
  Cc: linux-mm, Andrew Morton, Mel Gorman, Johannes Weiner,
	David Rientjes, Tetsuo Handa, Hillf Danton, KAMEZAWA Hiroyuki

On Wed, Nov 18, 2015 at 5:03 AM, Michal Hocko <mhocko@kernel.org> wrote:
>
> The above results do seem optimistic but more loads should obviously be
> tested. I would really appreciate feedback on the approach I have
> chosen before I go into more tuning. Is this a viable way to go?

Tetsuo, does this latest version work for you too?

Andrew - I'm assuming this will all come through you at some point.

             Linus


* Re: [RFC 1/3] mm, oom: refactor oom detection
  2015-11-18 13:03 ` [RFC 1/3] mm, oom: refactor oom detection Michal Hocko
@ 2015-11-19 23:01   ` David Rientjes
  2015-11-20  9:06     ` Michal Hocko
  0 siblings, 1 reply; 22+ messages in thread
From: David Rientjes @ 2015-11-19 23:01 UTC (permalink / raw)
  To: Michal Hocko
  Cc: linux-mm, Andrew Morton, Linus Torvalds, Mel Gorman,
	Johannes Weiner, Tetsuo Handa, Hillf Danton, KAMEZAWA Hiroyuki,
	Michal Hocko

On Wed, 18 Nov 2015, Michal Hocko wrote:

> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 8034909faad2..020c005c5bc0 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2992,6 +2992,13 @@ static inline bool is_thp_gfp_mask(gfp_t gfp_mask)
>  	return (gfp_mask & (GFP_TRANSHUGE | __GFP_KSWAPD_RECLAIM)) == GFP_TRANSHUGE;
>  }
>  
> +/*
> + * Number of backoff steps for potentially reclaimable pages if the direct reclaim
> + * cannot make any progress. Each step will reduce 1/MAX_STALL_BACKOFF of the
> + * reclaimable memory.
> + */
> +#define MAX_STALL_BACKOFF 16
> +
>  static inline struct page *
>  __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  						struct alloc_context *ac)
> @@ -3004,6 +3011,9 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  	enum migrate_mode migration_mode = MIGRATE_ASYNC;
>  	bool deferred_compaction = false;
>  	int contended_compaction = COMPACT_CONTENDED_NONE;
> +	struct zone *zone;
> +	struct zoneref *z;
> +	int stall_backoff = 0;
>  
>  	/*
>  	 * In the slowpath, we sanity check order to avoid ever trying to
> @@ -3155,13 +3165,57 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  	if (gfp_mask & __GFP_NORETRY)
>  		goto noretry;
>  
> -	/* Keep reclaiming pages as long as there is reasonable progress */
> +	/*
> +	 * Do not retry high order allocations unless they are __GFP_REPEAT
> +	 * and even then do not retry endlessly unless explicitly told so
> +	 */
>  	pages_reclaimed += did_some_progress;
> -	if ((did_some_progress && order <= PAGE_ALLOC_COSTLY_ORDER) ||
> -	    ((gfp_mask & __GFP_REPEAT) && pages_reclaimed < (1 << order))) {
> -		/* Wait for some write requests to complete then retry */
> -		wait_iff_congested(ac->preferred_zone, BLK_RW_ASYNC, HZ/50);
> -		goto retry;
> +	if (order > PAGE_ALLOC_COSTLY_ORDER) {
> +		if (!(gfp_mask & __GFP_NOFAIL) &&
> +		   (!(gfp_mask & __GFP_REPEAT) || pages_reclaimed >= (1<<order)))
> +			goto noretry;
> +
> +		if (did_some_progress)
> +			goto retry;
> +	}

First of all, thanks very much for attacking this issue!

I'm concerned that we'll reach stall_backoff == MAX_STALL_BACKOFF too 
quickly if the wait_iff_congested() is removed.  While the memory is not 
immediately available for reclaim, this wait has at least partially stalled 
the allocator in the past, which may have resulted in external memory 
freeing.  I'm wondering if it would make sense to keep it, if for nothing 
more than to avoid an immediate retry.

> +
> +	/*
> +	 * Be optimistic and consider all pages on reclaimable LRUs as usable
> +	 * but make sure we converge to OOM if we cannot make any progress after
> +	 * multiple consecutive failed attempts.
> +	 */
> +	if (did_some_progress)
> +		stall_backoff = 0;
> +	else
> +		stall_backoff = min(stall_backoff+1, MAX_STALL_BACKOFF);
> +
> +	/*
> +	 * Keep reclaiming pages while there is a chance this will lead somewhere.
> +	 * If none of the target zones can satisfy our allocation request even
> +	 * if all reclaimable pages are considered then we are screwed and have
> +	 * to go OOM.
> +	 */
> +	for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx, ac->nodemask) {
> +		unsigned long free = zone_page_state(zone, NR_FREE_PAGES);

This is concerning; I would think that you would want to use 
zone_page_state_snapshot() at the very least for when 
stall_backoff == MAX_STALL_BACKOFF.

> +		unsigned long reclaimable;
> +		unsigned long target;
> +
> +		reclaimable = zone_reclaimable_pages(zone) +
> +			      zone_page_state(zone, NR_ISOLATED_FILE) +
> +			      zone_page_state(zone, NR_ISOLATED_ANON);

Does NR_ISOLATED_ANON mean anything relevant here in swapless 
environments?

> +		target = reclaimable;
> +		target -= DIV_ROUND_UP(stall_backoff * target, MAX_STALL_BACKOFF);
> +		target += free;
> +
> +		/*
> +		 * Would the allocation succeed if we reclaimed the whole target?
> +		 */
> +		if (__zone_watermark_ok(zone, order, min_wmark_pages(zone),
> +				ac->high_zoneidx, alloc_flags, target)) {
> +			/* Wait for some write requests to complete then retry */
> +			wait_iff_congested(zone, BLK_RW_ASYNC, HZ/50);
> +			goto retry;
> +		}
>  	}
>  
>  	/* Reclaim has failed us, start killing things */
> @@ -3170,8 +3224,10 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  		goto got_pg;
>  
>  	/* Retry as long as the OOM killer is making progress */
> -	if (did_some_progress)
> +	if (did_some_progress) {
> +		stall_backoff = 0;
>  		goto retry;
> +	}
>  
>  noretry:
>  	/*
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index a4507ecaefbf..9060a71e5a90 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -192,7 +192,7 @@ static bool sane_reclaim(struct scan_control *sc)
>  }
>  #endif
>  
> -static unsigned long zone_reclaimable_pages(struct zone *zone)
> +unsigned long zone_reclaimable_pages(struct zone *zone)
>  {
>  	unsigned long nr;
>  
> @@ -2594,10 +2594,6 @@ static bool shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
>  
>  		if (shrink_zone(zone, sc, zone_idx(zone) == classzone_idx))
>  			reclaimable = true;
> -
> -		if (global_reclaim(sc) &&
> -		    !reclaimable && zone_reclaimable(zone))
> -			reclaimable = true;
>  	}
>  
>  	/*

It's possible to just make shrink_zones() void and drop the reclaimable 
variable.

Otherwise looks good!


* Re: [RFC 2/3] mm: throttle on IO only when there are too many dirty and writeback pages
  2015-11-18 13:03 ` [RFC 2/3] mm: throttle on IO only when there are too many dirty and writeback pages Michal Hocko
@ 2015-11-19 23:12   ` David Rientjes
  2015-11-20  9:15     ` Michal Hocko
  0 siblings, 1 reply; 22+ messages in thread
From: David Rientjes @ 2015-11-19 23:12 UTC (permalink / raw)
  To: Michal Hocko
  Cc: linux-mm, Andrew Morton, Linus Torvalds, Mel Gorman,
	Johannes Weiner, Tetsuo Handa, Hillf Danton, KAMEZAWA Hiroyuki,
	Michal Hocko

On Wed, 18 Nov 2015, Michal Hocko wrote:

> From: Michal Hocko <mhocko@suse.com>
> 
> wait_iff_congested has been used to throttle the allocator before it
> retried another round of direct reclaim, to allow the writeback to make
> some progress and prevent reclaim from looping over dirty/writeback
> pages without making any progress. We used to do congestion_wait before
> 0e093d99763e ("writeback: do not sleep on the congestion queue if
> there are no congested BDIs or if significant congestion is not being
> encountered in the current zone") but that led to undesirable stalls
> and sleeping for the full timeout even when the BDI wasn't congested.
> Hence wait_iff_congested was used instead. But it seems that even
> wait_iff_congested doesn't work as expected. We might have a small file
> LRU list with all pages dirty/writeback and yet the bdi is not congested,
> so this ends up being just a cond_resched and can trigger a premature
> OOM.
> 
> This patch replaces the unconditional wait_iff_congested by
> congestion_wait, which is executed only if we _know_ that the last round
> of direct reclaim didn't make any progress and dirty+writeback pages make
> up more than half of the reclaimable pages in the zone which might be
> usable for our target allocation. This shouldn't reintroduce the stalls
> fixed by 0e093d99763e because congestion_wait is called only when we
> are getting hopeless and sleeping is a better choice than OOM with many
> pages under IO.
> 

Why HZ/10 instead of HZ/50?

> Signed-off-by: Michal Hocko <mhocko@suse.com>
> ---
>  mm/page_alloc.c | 16 ++++++++++++++--
>  1 file changed, 14 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 020c005c5bc0..e6271bc19e6a 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -3212,8 +3212,20 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  		 */
>  		if (__zone_watermark_ok(zone, order, min_wmark_pages(zone),
>  				ac->high_zoneidx, alloc_flags, target)) {
> -			/* Wait for some write requests to complete then retry */
> -			wait_iff_congested(zone, BLK_RW_ASYNC, HZ/50);
> +			unsigned long writeback = zone_page_state(zone, NR_WRITEBACK),
> +				      dirty = zone_page_state(zone, NR_FILE_DIRTY);
> +
> +			/*
> +			 * If we didn't make any progress and have a lot of
> +			 * dirty + writeback pages then we should wait for
> +			 * an IO to complete to slow down the reclaim and
> +			 * prevent from pre mature OOM
> +			 */
> +			if (!did_some_progress && 2*(writeback + dirty) > reclaimable)
> +				congestion_wait(BLK_RW_ASYNC, HZ/10);

The purpose of the heuristic seems logical, but I'm concerned about the 
threshold for determining when to wait and when to just resched and retry 
again.

This triggers for environments without swap when

2 * (NR_WRITEBACK + NR_DIRTY) > (NR_ACTIVE_FILE + NR_INACTIVE_FILE +
				 NR_ISOLATED_FILE + NR_ISOLATED_ANON)

 [ The use of NR_ISOLATED_ANON in swapless is asked about in patch 1. ]

How exactly was this chosen?  Why not when the two sides equal each other?

> +			else
> +				cond_resched();
> +
>  			goto retry;
>  		}
>  	}


* Re: [RFC 3/3] mm: use watermark checks for __GFP_REPEAT high order allocations
  2015-11-18 13:04 ` [RFC 3/3] mm: use watermark checks for __GFP_REPEAT high order allocations Michal Hocko
@ 2015-11-19 23:17   ` David Rientjes
  2015-11-20  9:18     ` Michal Hocko
  0 siblings, 1 reply; 22+ messages in thread
From: David Rientjes @ 2015-11-19 23:17 UTC (permalink / raw)
  To: Michal Hocko
  Cc: linux-mm, Andrew Morton, Linus Torvalds, Mel Gorman,
	Johannes Weiner, Tetsuo Handa, Hillf Danton, KAMEZAWA Hiroyuki,
	Michal Hocko

On Wed, 18 Nov 2015, Michal Hocko wrote:

> From: Michal Hocko <mhocko@suse.com>
> 
> __alloc_pages_slowpath retries costly allocations until at least
> order worth of pages has been reclaimed or the watermark check for at
> least one zone would succeed after reclaiming all pages when the reclaim
> hasn't made any progress.
> 
> The first condition was added by a41f24ea9fd6 ("page allocator: smarter
> retry of costly-order allocations") and it assumed that lumpy reclaim
> could have created a page of sufficient order. Lumpy reclaim has been
> removed quite some time ago, so the assumption doesn't hold anymore.
> It would be more appropriate to check the compaction progress instead,
> but this patch simply removes the check and relies solely on the
> watermark check.
> 
> To prevent too many retries, the stall_backoff is not reset after a
> reclaim which made progress, because we cannot assume it helped the
> high order situation.
> 
> Signed-off-by: Michal Hocko <mhocko@suse.com>
> ---
>  mm/page_alloc.c | 20 ++++++++------------
>  1 file changed, 8 insertions(+), 12 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index e6271bc19e6a..999c8cdbe7b5 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -3006,7 +3006,6 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  	bool can_direct_reclaim = gfp_mask & __GFP_DIRECT_RECLAIM;
>  	struct page *page = NULL;
>  	int alloc_flags;
> -	unsigned long pages_reclaimed = 0;
>  	unsigned long did_some_progress;
>  	enum migrate_mode migration_mode = MIGRATE_ASYNC;
>  	bool deferred_compaction = false;
> @@ -3167,24 +3166,21 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  
>  	/*
>  	 * Do not retry high order allocations unless they are __GFP_REPEAT
> -	 * and even then do not retry endlessly unless explicitly told so
> +	 * unless explicitly told so.
>  	 */
> -	pages_reclaimed += did_some_progress;
> -	if (order > PAGE_ALLOC_COSTLY_ORDER) {
> -		if (!(gfp_mask & __GFP_NOFAIL) &&
> -		   (!(gfp_mask & __GFP_REPEAT) || pages_reclaimed >= (1<<order)))
> -			goto noretry;
> -
> -		if (did_some_progress)
> -			goto retry;
> -	}
> +	if (order > PAGE_ALLOC_COSTLY_ORDER &&
> +			!(gfp_mask & (__GFP_REPEAT|__GFP_NOFAIL)))
> +		goto noretry;

Who is allocating order > PAGE_ALLOC_COSTLY_ORDER with __GFP_REPEAT and 
would be affected by this change?

>  
>  	/*
>  	 * Be optimistic and consider all pages on reclaimable LRUs as usable
>  	 * but make sure we converge to OOM if we cannot make any progress after
>  	 * multiple consecutive failed attempts.
> +	 * Costly __GFP_REPEAT allocations might have made a progress but this
> +	 * doesn't mean their order will become available due to high fragmentation
> +	 * so do not reset the backoff for them
>  	 */
> -	if (did_some_progress)
> +	if (did_some_progress && order <= PAGE_ALLOC_COSTLY_ORDER)
>  		stall_backoff = 0;
>  	else
>  		stall_backoff = min(stall_backoff+1, MAX_STALL_BACKOFF); 

This makes sense if there are high-order users of __GFP_REPEAT, since 
using the number of pages reclaimed by itself isn't helpful.


* Re: [RFC 1/3] mm, oom: refactor oom detection
  2015-11-19 23:01   ` David Rientjes
@ 2015-11-20  9:06     ` Michal Hocko
  2015-11-20 23:27       ` David Rientjes
  0 siblings, 1 reply; 22+ messages in thread
From: Michal Hocko @ 2015-11-20  9:06 UTC (permalink / raw)
  To: David Rientjes
  Cc: linux-mm, Andrew Morton, Linus Torvalds, Mel Gorman,
	Johannes Weiner, Tetsuo Handa, Hillf Danton, KAMEZAWA Hiroyuki

On Thu 19-11-15 15:01:38, David Rientjes wrote:
> On Wed, 18 Nov 2015, Michal Hocko wrote:
[...]
> > @@ -3155,13 +3165,57 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> >  	if (gfp_mask & __GFP_NORETRY)
> >  		goto noretry;
> >  
> > -	/* Keep reclaiming pages as long as there is reasonable progress */
> > +	/*
> > +	 * Do not retry high order allocations unless they are __GFP_REPEAT
> > +	 * and even then do not retry endlessly unless explicitly told so
> > +	 */
> >  	pages_reclaimed += did_some_progress;
> > -	if ((did_some_progress && order <= PAGE_ALLOC_COSTLY_ORDER) ||
> > -	    ((gfp_mask & __GFP_REPEAT) && pages_reclaimed < (1 << order))) {
> > -		/* Wait for some write requests to complete then retry */
> > -		wait_iff_congested(ac->preferred_zone, BLK_RW_ASYNC, HZ/50);
> > -		goto retry;
> > +	if (order > PAGE_ALLOC_COSTLY_ORDER) {
> > +		if (!(gfp_mask & __GFP_NOFAIL) &&
> > +		   (!(gfp_mask & __GFP_REPEAT) || pages_reclaimed >= (1<<order)))
> > +			goto noretry;
> > +
> > +		if (did_some_progress)
> > +			goto retry;
> > +	}
> 
> First of all, thanks very much for attacking this issue!
> 
> I'm concerned that we'll reach stall_backoff == MAX_STALL_BACKOFF too 
> quickly if the wait_iff_congested() is removed.  While the memory is not 
> immediately available for reclaim, this wait has at least partially stalled 
> the allocator in the past, which may have resulted in external memory 
> freeing.  I'm wondering if it would make sense to keep it, if for nothing 
> more than to avoid an immediate retry.

My experiments show that wait_iff_congested slept only very rarely if at
all (even for loads with heavy IO). There might be other loads where
it really hits, though. If you have any of those I would be more than
happy if you could share them, or at least test them with these patches.

If you are concerned about the removed wait_iff_congested for costly
__GFP_REPEAT allocations, then the follow-up patch changes that to use a
common sleep&retry logic.

> > +
> > +	/*
> > +	 * Be optimistic and consider all pages on reclaimable LRUs as usable
> > +	 * but make sure we converge to OOM if we cannot make any progress after
> > +	 * multiple consecutive failed attempts.
> > +	 */
> > +	if (did_some_progress)
> > +		stall_backoff = 0;
> > +	else
> > +		stall_backoff = min(stall_backoff+1, MAX_STALL_BACKOFF);
> > +
> > +	/*
> > +	 * Keep reclaiming pages while there is a chance this will lead somewhere.
> > +	 * If none of the target zones can satisfy our allocation request even
> > +	 * if all reclaimable pages are considered then we are screwed and have
> > +	 * to go OOM.
> > +	 */
> > +	for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx, ac->nodemask) {
> > +		unsigned long free = zone_page_state(zone, NR_FREE_PAGES);
> 
> This is concerning; I would think that you would want to use 
> zone_page_state_snapshot() at the very least for when 
> stall_backoff == MAX_STALL_BACKOFF.

OK, this is a fair point. In an extreme case where vmstat counters are
way outdated we might loop endlessly. I will just use the _snapshot variant.
The overhead shouldn't be a concern as this is a slow path.

Other counters are using backoff so they do not need this special
treatment.

> > +		unsigned long reclaimable;
> > +		unsigned long target;
> > +
> > +		reclaimable = zone_reclaimable_pages(zone) +
> > +			      zone_page_state(zone, NR_ISOLATED_FILE) +
> > +			      zone_page_state(zone, NR_ISOLATED_ANON);
> 
> Does NR_ISOLATED_ANON mean anything relevant here in swapless 
> environments?

It should be 0 so I didn't bother to check for swapless configuration.

[...]

> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index a4507ecaefbf..9060a71e5a90 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -192,7 +192,7 @@ static bool sane_reclaim(struct scan_control *sc)
> >  }
> >  #endif
> >  
> > -static unsigned long zone_reclaimable_pages(struct zone *zone)
> > +unsigned long zone_reclaimable_pages(struct zone *zone)
> >  {
> >  	unsigned long nr;
> >  
> > @@ -2594,10 +2594,6 @@ static bool shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
> >  
> >  		if (shrink_zone(zone, sc, zone_idx(zone) == classzone_idx))
> >  			reclaimable = true;
> > -
> > -		if (global_reclaim(sc) &&
> > -		    !reclaimable && zone_reclaimable(zone))
> > -			reclaimable = true;
> >  	}
> >  
> >  	/*
> 
> It's possible to just make shrink_zones() void and drop the reclaimable 
> variable.

True, will do that.
 
> Otherwise looks good!

Thanks for the review!

Here is what I will fold it to the original patch
---
commit b8687e8406f4ec1b194b259acaea115711d319cd
Author: Michal Hocko <mhocko@suse.com>
Date:   Fri Nov 20 10:04:22 2015 +0100

    fold me: mm, oom: refactor oom detection
    
    [rientjes@google.com: use zone_page_state_snapshot for NR_FREE_PAGES]
    [rientjes@google.com: shrink_zones doesn't need to return anything]

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 999c8cdbe7b5..54476e71b572 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3192,7 +3192,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	 * to go OOM.
 	 */
 	for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx, ac->nodemask) {
-		unsigned long free = zone_page_state(zone, NR_FREE_PAGES);
+		unsigned long free = zone_page_state_snapshot(zone, NR_FREE_PAGES);
 		unsigned long reclaimable;
 		unsigned long target;
 
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 9060a71e5a90..784e2b28d2fb 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2511,10 +2511,8 @@ static inline bool compaction_ready(struct zone *zone, int order)
  *
  * If a zone is deemed to be full of pinned pages then just give it a light
  * scan then give up on it.
- *
- * Returns true if a zone was reclaimable.
  */
-static bool shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
+static void shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
 {
 	struct zoneref *z;
 	struct zone *zone;
@@ -2522,7 +2520,6 @@ static bool shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
 	unsigned long nr_soft_scanned;
 	gfp_t orig_mask;
 	enum zone_type requested_highidx = gfp_zone(sc->gfp_mask);
-	bool reclaimable = false;
 
 	/*
 	 * If the number of buffer_heads in the machine exceeds the maximum
@@ -2587,13 +2584,10 @@ static bool shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
 						&nr_soft_scanned);
 			sc->nr_reclaimed += nr_soft_reclaimed;
 			sc->nr_scanned += nr_soft_scanned;
-			if (nr_soft_reclaimed)
-				reclaimable = true;
 			/* need some check for avoid more shrink_zone() */
 		}
 
-		if (shrink_zone(zone, sc, zone_idx(zone) == classzone_idx))
-			reclaimable = true;
+		shrink_zone(zone, sc, zone_idx(zone));
 	}
 
 	/*
@@ -2601,8 +2595,6 @@ static bool shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
 	 * promoted it to __GFP_HIGHMEM.
 	 */
 	sc->gfp_mask = orig_mask;
-
-	return reclaimable;
 }
 
 /*
-- 
Michal Hocko
SUSE Labs


* Re: [RFC 2/3] mm: throttle on IO only when there are too many dirty and writeback pages
  2015-11-19 23:12   ` David Rientjes
@ 2015-11-20  9:15     ` Michal Hocko
  0 siblings, 0 replies; 22+ messages in thread
From: Michal Hocko @ 2015-11-20  9:15 UTC (permalink / raw)
  To: David Rientjes
  Cc: linux-mm, Andrew Morton, Linus Torvalds, Mel Gorman,
	Johannes Weiner, Tetsuo Handa, Hillf Danton, KAMEZAWA Hiroyuki

On Thu 19-11-15 15:12:39, David Rientjes wrote:
> On Wed, 18 Nov 2015, Michal Hocko wrote:
> 
> > From: Michal Hocko <mhocko@suse.com>
> > 
> > wait_iff_congested has been used to throttle the allocator before it
> > retried another round of direct reclaim, to allow the writeback to make
> > some progress and prevent reclaim from looping over dirty/writeback
> > pages without making any progress. We used to do congestion_wait before
> > 0e093d99763e ("writeback: do not sleep on the congestion queue if
> > there are no congested BDIs or if significant congestion is not being
> > encountered in the current zone") but that led to undesirable stalls
> > and sleeping for the full timeout even when the BDI wasn't congested.
> > Hence wait_iff_congested was used instead. But it seems that even
> > wait_iff_congested doesn't work as expected. We might have a small file
> > LRU list with all pages dirty/writeback and yet the bdi is not congested,
> > so this ends up being just a cond_resched and can trigger a premature
> > OOM.
> > 
> > This patch replaces the unconditional wait_iff_congested by
> > congestion_wait, which is executed only if we _know_ that the last round
> > of direct reclaim didn't make any progress and dirty+writeback pages make
> > up more than half of the reclaimable pages in the zone which might be
> > usable for our target allocation. This shouldn't reintroduce the stalls
> > fixed by 0e093d99763e because congestion_wait is called only when we
> > are getting hopeless and sleeping is a better choice than OOM with many
> > pages under IO.
> > 
> 
> Why HZ/10 instead of HZ/50?

My idea was to give the writeback more time. As we only wait when there
is a lot of dirty/writeback data it shouldn't stall pointlessly.

> > Signed-off-by: Michal Hocko <mhocko@suse.com>
> > ---
> >  mm/page_alloc.c | 16 ++++++++++++++--
> >  1 file changed, 14 insertions(+), 2 deletions(-)
> > 
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 020c005c5bc0..e6271bc19e6a 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -3212,8 +3212,20 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> >  		 */
> >  		if (__zone_watermark_ok(zone, order, min_wmark_pages(zone),
> >  				ac->high_zoneidx, alloc_flags, target)) {
> > -			/* Wait for some write requests to complete then retry */
> > -			wait_iff_congested(zone, BLK_RW_ASYNC, HZ/50);
> > +			unsigned long writeback = zone_page_state(zone, NR_WRITEBACK),
> > +				      dirty = zone_page_state(zone, NR_FILE_DIRTY);
> > +
> > +			/*
> > +			 * If we didn't make any progress and have a lot of
> > +			 * dirty + writeback pages then we should wait for
> > +			 * an IO to complete to slow down the reclaim and
> > +			 * prevent from pre mature OOM
> > +			 */
> > +			if (!did_some_progress && 2*(writeback + dirty) > reclaimable)
> > +				congestion_wait(BLK_RW_ASYNC, HZ/10);
> 
> The purpose of the heuristic seems logical, but I'm concerned about the 
> threshold for determining when to wait and when to just resched and retry 
> again.
> 
> This triggers for environments without swap when
> 
> 2 * (NR_WRITEBACK + NR_DIRTY) > (NR_ACTIVE_FILE + NR_INACTIVE_FILE +
> 				 NR_ISOLATED_FILE + NR_ISOLATED_ANON)
> 
>  [ The use of NR_ISOLATED_ANON in swapless is asked about in patch 1. ]
> 
> How exactly was this chosen?  Why not when the two sides equal each other?

The idea was to stall when the to-be-flushed pages form at least half of
the reclaimable memory, which sounds like an easy concept to start with.
This worked reasonably well in my OOM stress tests but I am open to
suggestions. Ideally this should be a function of the writeback speed
and maybe we will get there one day but I would like to start with
something simple which works most of the time.

> 
> > +			else
> > +				cond_resched();
> > +
> >  			goto retry;
> >  		}
> >  	}

Thanks
-- 
Michal Hocko
SUSE Labs


* Re: [RFC 3/3] mm: use watermark checks for __GFP_REPEAT high order allocations
  2015-11-19 23:17   ` David Rientjes
@ 2015-11-20  9:18     ` Michal Hocko
  2015-11-20 23:33       ` David Rientjes
  0 siblings, 1 reply; 22+ messages in thread
From: Michal Hocko @ 2015-11-20  9:18 UTC (permalink / raw)
  To: David Rientjes
  Cc: linux-mm, Andrew Morton, Linus Torvalds, Mel Gorman,
	Johannes Weiner, Tetsuo Handa, Hillf Danton, KAMEZAWA Hiroyuki

On Thu 19-11-15 15:17:35, David Rientjes wrote:
> On Wed, 18 Nov 2015, Michal Hocko wrote:
[...]
> > @@ -3167,24 +3166,21 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> >  
> >  	/*
> >  	 * Do not retry high order allocations unless they are __GFP_REPEAT
> > -	 * and even then do not retry endlessly unless explicitly told so
> > +	 * unless explicitly told so.
> >  	 */
> > -	pages_reclaimed += did_some_progress;
> > -	if (order > PAGE_ALLOC_COSTLY_ORDER) {
> > -		if (!(gfp_mask & __GFP_NOFAIL) &&
> > -		   (!(gfp_mask & __GFP_REPEAT) || pages_reclaimed >= (1<<order)))
> > -			goto noretry;
> > -
> > -		if (did_some_progress)
> > -			goto retry;
> > -	}
> > +	if (order > PAGE_ALLOC_COSTLY_ORDER &&
> > +			!(gfp_mask & (__GFP_REPEAT|__GFP_NOFAIL)))
> > +		goto noretry;
> 
> Who is allocating order > PAGE_ALLOC_COSTLY_ORDER with __GFP_REPEAT and 
> would be affected by this change?

E.g. hugetlb pages. I have tested this in my testing scenario 3.

> >  
> >  	/*
> >  	 * Be optimistic and consider all pages on reclaimable LRUs as usable
> >  	 * but make sure we converge to OOM if we cannot make any progress after
> >  	 * multiple consecutive failed attempts.
> > +	 * Costly __GFP_REPEAT allocations might have made a progress but this
> > +	 * doesn't mean their order will become available due to high fragmentation
> > +	 * so do not reset the backoff for them
> >  	 */
> > -	if (did_some_progress)
> > +	if (did_some_progress && order <= PAGE_ALLOC_COSTLY_ORDER)
> >  		stall_backoff = 0;
> >  	else
> >  		stall_backoff = min(stall_backoff+1, MAX_STALL_BACKOFF); 
> 
> This makes sense if there are high-order users of __GFP_REPEAT, since 
> using the number of pages reclaimed by itself isn't helpful.

Yes, that was my thinking

Thanks!

-- 
Michal Hocko
SUSE Labs


* Re: [RFC 1/3] mm, oom: refactor oom detection
  2015-11-20  9:06     ` Michal Hocko
@ 2015-11-20 23:27       ` David Rientjes
  2015-11-23  9:41         ` Michal Hocko
  0 siblings, 1 reply; 22+ messages in thread
From: David Rientjes @ 2015-11-20 23:27 UTC (permalink / raw)
  To: Michal Hocko
  Cc: linux-mm, Andrew Morton, Linus Torvalds, Mel Gorman,
	Johannes Weiner, Tetsuo Handa, Hillf Danton, KAMEZAWA Hiroyuki

On Fri, 20 Nov 2015, Michal Hocko wrote:

> > > +		unsigned long reclaimable;
> > > +		unsigned long target;
> > > +
> > > +		reclaimable = zone_reclaimable_pages(zone) +
> > > +			      zone_page_state(zone, NR_ISOLATED_FILE) +
> > > +			      zone_page_state(zone, NR_ISOLATED_ANON);
> > 
> > Does NR_ISOLATED_ANON mean anything relevant here in swapless 
> > environments?
> 
> It should be 0 so I didn't bother to check for swapless configuration.
> 

I'm not sure I understand your point, memory compaction certainly 
increments NR_ISOLATED_ANON and that would be considered unreclaimable in 
a swapless environment, correct?


* Re: [RFC 3/3] mm: use watermark checks for __GFP_REPEAT high order allocations
  2015-11-20  9:18     ` Michal Hocko
@ 2015-11-20 23:33       ` David Rientjes
  2015-11-23  9:46         ` Michal Hocko
  0 siblings, 1 reply; 22+ messages in thread
From: David Rientjes @ 2015-11-20 23:33 UTC (permalink / raw)
  To: Michal Hocko
  Cc: linux-mm, Andrew Morton, Linus Torvalds, Mel Gorman,
	Johannes Weiner, Tetsuo Handa, Hillf Danton, KAMEZAWA Hiroyuki

On Fri, 20 Nov 2015, Michal Hocko wrote:

> > > @@ -3167,24 +3166,21 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> > >  
> > >  	/*
> > >  	 * Do not retry high order allocations unless they are __GFP_REPEAT
> > > -	 * and even then do not retry endlessly unless explicitly told so
> > > +	 * unless explicitly told so.
> > >  	 */
> > > -	pages_reclaimed += did_some_progress;
> > > -	if (order > PAGE_ALLOC_COSTLY_ORDER) {
> > > -		if (!(gfp_mask & __GFP_NOFAIL) &&
> > > -		   (!(gfp_mask & __GFP_REPEAT) || pages_reclaimed >= (1<<order)))
> > > -			goto noretry;
> > > -
> > > -		if (did_some_progress)
> > > -			goto retry;
> > > -	}
> > > +	if (order > PAGE_ALLOC_COSTLY_ORDER &&
> > > +			!(gfp_mask & (__GFP_REPEAT|__GFP_NOFAIL)))
> > > +		goto noretry;
> > 
> > Who is allocating order > PAGE_ALLOC_COSTLY_ORDER with __GFP_REPEAT and 
> > would be affected by this change?
> 
> E.g. hugetlb pages. I have tested this in my testing scenario 3.
> 

If that's the only high-order user of __GFP_REPEAT, we might want to 
consider dropping it.  I believe the hugetlb usecase would only be 
relevant in early init (when __GFP_REPEAT shouldn't logically help) and 
when returning surplus pages due to hugetlb overcommit.  Since hugetlb 
overcommit is best effort and we already know that the
pages_reclaimed >= (1<<order) check is ridiculous for order-9 pages, I 
think you could just drop hugetlb's usage of __GFP_REPEAT and nobody would 
notice.


* Re: [RFC 1/3] mm, oom: refactor oom detection
  2015-11-20 23:27       ` David Rientjes
@ 2015-11-23  9:41         ` Michal Hocko
  2015-11-23 18:24           ` Johannes Weiner
  0 siblings, 1 reply; 22+ messages in thread
From: Michal Hocko @ 2015-11-23  9:41 UTC (permalink / raw)
  To: David Rientjes
  Cc: linux-mm, Andrew Morton, Linus Torvalds, Mel Gorman,
	Johannes Weiner, Tetsuo Handa, Hillf Danton, KAMEZAWA Hiroyuki

On Fri 20-11-15 15:27:39, David Rientjes wrote:
> On Fri, 20 Nov 2015, Michal Hocko wrote:
> 
> > > > +		unsigned long reclaimable;
> > > > +		unsigned long target;
> > > > +
> > > > +		reclaimable = zone_reclaimable_pages(zone) +
> > > > +			      zone_page_state(zone, NR_ISOLATED_FILE) +
> > > > +			      zone_page_state(zone, NR_ISOLATED_ANON);
> > > 
> > > Does NR_ISOLATED_ANON mean anything relevant here in swapless 
> > > environments?
> > 
> > It should be 0 so I didn't bother to check for swapless configuration.
> > 
> 
> I'm not sure I understand your point; memory compaction certainly 
> increments NR_ISOLATED_ANON, and that would be considered unreclaimable 
> in a swapless environment, correct?

My bad. I have completely missed that compaction/migration is updating
the counter as well. I would expect that the number shouldn't be large
enough to matter but I guess it is better to simply exclude it. I will
fold this into the first patch.

Thanks
---
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 54476e71b572..7d885d7fae86 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3197,8 +3197,10 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 		unsigned long target;
 
 		reclaimable = zone_reclaimable_pages(zone) +
-			      zone_page_state(zone, NR_ISOLATED_FILE) +
-			      zone_page_state(zone, NR_ISOLATED_ANON);
+			      zone_page_state(zone, NR_ISOLATED_FILE);
+		if (get_nr_swap_pages() > 0)
+			reclaimable += zone_page_state(zone, NR_ISOLATED_ANON);
+
 		target = reclaimable;
 		target -= DIV_ROUND_UP(stall_backoff * target, MAX_STALL_BACKOFF);
 		target += free;
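
Just to illustrate the arithmetic of the resulting check (made-up
numbers, and assuming for the sake of the example that
MAX_STALL_BACKOFF were 16): with reclaimable = 1600 and free = 100, a
stall_backoff of 4 gives target = 1600 - DIV_ROUND_UP(4 * 1600, 16) +
100 = 1300, and once stall_backoff reaches MAX_STALL_BACKOFF the target
degenerates to the free pages alone, so consecutive failed attempts
converge to the plain watermark check and eventually to the OOM killer.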

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* Re: [RFC 3/3] mm: use watermark checks for __GFP_REPEAT high order allocations
  2015-11-20 23:33       ` David Rientjes
@ 2015-11-23  9:46         ` Michal Hocko
  0 siblings, 0 replies; 22+ messages in thread
From: Michal Hocko @ 2015-11-23  9:46 UTC (permalink / raw)
  To: David Rientjes
  Cc: linux-mm, Andrew Morton, Linus Torvalds, Mel Gorman,
	Johannes Weiner, Tetsuo Handa, Hillf Danton, KAMEZAWA Hiroyuki

On Fri 20-11-15 15:33:17, David Rientjes wrote:
> On Fri, 20 Nov 2015, Michal Hocko wrote:
> 
> > > > @@ -3167,24 +3166,21 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> > > >  
> > > >  	/*
> > > >  	 * Do not retry high order allocations unless they are __GFP_REPEAT
> > > > -	 * and even then do not retry endlessly unless explicitly told so
> > > > +	 * unless explicitly told so.
> > > >  	 */
> > > > -	pages_reclaimed += did_some_progress;
> > > > -	if (order > PAGE_ALLOC_COSTLY_ORDER) {
> > > > -		if (!(gfp_mask & __GFP_NOFAIL) &&
> > > > -		   (!(gfp_mask & __GFP_REPEAT) || pages_reclaimed >= (1<<order)))
> > > > -			goto noretry;
> > > > -
> > > > -		if (did_some_progress)
> > > > -			goto retry;
> > > > -	}
> > > > +	if (order > PAGE_ALLOC_COSTLY_ORDER &&
> > > > +			!(gfp_mask & (__GFP_REPEAT|__GFP_NOFAIL)))
> > > > +		goto noretry;
> > > 
> > > Who is allocating order > PAGE_ALLOC_COSTLY_ORDER with __GFP_REPEAT and 
> > > would be affected by this change?
> > 
> > E.g. hugetlb pages. I have tested this in my testing scenario 3.
> > 
> 
> If that's the only high-order user of __GFP_REPEAT, we might want to 
> consider dropping it. 

There are many others. I have tried to clean this area up quite recently
http://lkml.kernel.org/r/1446740160-29094-1-git-send-email-mhocko%40kernel.org
and managed to drop half of the current usage of __GFP_REPEAT.

> I believe the hugetlb usecase would only be 
> relevant in early init (when __GFP_REPEAT shouldn't logically help) and 
> when returning surplus pages due to hugetlb overcommit.  Since hugetlb 
> overcommit is best effort and we already know that the
> pages_reclaimed >= (1<<order) check is ridiculous for order-9 pages, I 
> think you could just drop hugetlb's usage of __GFP_REPEAT and nobody would 
> notice.

Even if that were the case, which I am not sure about right now, I believe
this is a separate topic. We should still support __GFP_REPEAT in some form.
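
To make it concrete, the class of request we are talking about is a
costly order allocation which explicitly opts in for extra reclaim
effort, e.g. something along these lines (a made-up call site for
illustration, not hugetlb's actual code):

	struct page *page;

	/*
	 * Ask for an order-9 block (2MB with 4kB pages) and tell the
	 * allocator to try harder before failing; the caller still has
	 * to handle a NULL return.
	 */
	page = alloc_pages(GFP_KERNEL | __GFP_REPEAT | __GFP_NOWARN, 9);
	if (!page)
		return -ENOMEM;

With this patch such a request is no longer cut off by the
pages_reclaimed >= (1<<order) heuristic; it falls through to the same
watermark based retry logic as everything else.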

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [RFC 1/3] mm, oom: refactor oom detection
  2015-11-23  9:41         ` Michal Hocko
@ 2015-11-23 18:24           ` Johannes Weiner
  2015-11-24 10:03             ` Michal Hocko
  0 siblings, 1 reply; 22+ messages in thread
From: Johannes Weiner @ 2015-11-23 18:24 UTC (permalink / raw)
  To: Michal Hocko
  Cc: David Rientjes, linux-mm, Andrew Morton, Linus Torvalds,
	Mel Gorman, Tetsuo Handa, Hillf Danton, KAMEZAWA Hiroyuki

On Mon, Nov 23, 2015 at 10:41:06AM +0100, Michal Hocko wrote:
> @@ -3197,8 +3197,10 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  		unsigned long target;
>  
>  		reclaimable = zone_reclaimable_pages(zone) +
> -			      zone_page_state(zone, NR_ISOLATED_FILE) +
> -			      zone_page_state(zone, NR_ISOLATED_ANON);
> +			      zone_page_state(zone, NR_ISOLATED_FILE);
> +		if (get_nr_swap_pages() > 0)
> +			reclaimable += zone_page_state(zone, NR_ISOLATED_ANON);

Can you include the isolated counts in zone_reclaimable_pages()?

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [RFC 1/3] mm, oom: refactor oom detection
  2015-11-23 18:24           ` Johannes Weiner
@ 2015-11-24 10:03             ` Michal Hocko
  0 siblings, 0 replies; 22+ messages in thread
From: Michal Hocko @ 2015-11-24 10:03 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: David Rientjes, linux-mm, Andrew Morton, Linus Torvalds,
	Mel Gorman, Tetsuo Handa, Hillf Danton, KAMEZAWA Hiroyuki

On Mon 23-11-15 13:24:47, Johannes Weiner wrote:
> On Mon, Nov 23, 2015 at 10:41:06AM +0100, Michal Hocko wrote:
> > @@ -3197,8 +3197,10 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> >  		unsigned long target;
> >  
> >  		reclaimable = zone_reclaimable_pages(zone) +
> > -			      zone_page_state(zone, NR_ISOLATED_FILE) +
> > -			      zone_page_state(zone, NR_ISOLATED_ANON);
> > +			      zone_page_state(zone, NR_ISOLATED_FILE);
> > +		if (get_nr_swap_pages() > 0)
> > +			reclaimable += zone_page_state(zone, NR_ISOLATED_ANON);
> 
> Can you include the isolated counts in zone_reclaimable_pages()?

OK, this makes sense. NR_ISOLATED_* should be a temporary condition
after which pages either go back to the LRU or get migrated to a
different location and are thus freed.

I will spin this into a separate patch.
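
For the sake of the discussion, a minimal sketch of what I have in mind
for zone_reclaimable_pages() in mm/vmscan.c (the isolated counters
simply folded in next to the LRU ones; the actual patch may end up
looking a bit different):

	static unsigned long zone_reclaimable_pages(struct zone *zone)
	{
		unsigned long nr;

		/* file pages are reclaimable regardless of swap */
		nr = zone_page_state(zone, NR_ACTIVE_FILE) +
		     zone_page_state(zone, NR_INACTIVE_FILE) +
		     zone_page_state(zone, NR_ISOLATED_FILE);

		/* anon pages are reclaimable only if they can be swapped out */
		if (get_nr_swap_pages() > 0)
			nr += zone_page_state(zone, NR_ACTIVE_ANON) +
			      zone_page_state(zone, NR_INACTIVE_ANON) +
			      zone_page_state(zone, NR_ISOLATED_ANON);

		return nr;
	}

The caller in __alloc_pages_slowpath can then drop the explicit
NR_ISOLATED_* terms and use plain zone_reclaimable_pages(zone).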

Thanks!
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [RFC 3/3] mm: use watermark checks for __GFP_REPEAT high order allocations
  2015-12-02  7:07   ` Hillf Danton
@ 2015-12-02  8:52     ` Michal Hocko
  0 siblings, 0 replies; 22+ messages in thread
From: Michal Hocko @ 2015-12-02  8:52 UTC (permalink / raw)
  To: Hillf Danton
  Cc: linux-mm, 'Andrew Morton', 'Linus Torvalds',
	'Mel Gorman', 'Johannes Weiner',
	'David Rientjes', 'Tetsuo Handa',
	'KAMEZAWA Hiroyuki'

On Wed 02-12-15 15:07:26, Hillf Danton wrote:
> > From: Michal Hocko <mhocko@suse.com>
> > 
> > __alloc_pages_slowpath retries costly allocations until at least
> > 1<<order pages have been reclaimed or, if the reclaim hasn't made any
> > progress, until the watermark check for at least one zone would
> > succeed after reclaiming all reclaimable pages.
> > 
> > The first condition was added by a41f24ea9fd6 ("page allocator: smarter
> > retry of costly-order allocations") and it assumed that lumpy reclaim
> > could have created a page of the sufficient order. Lumpy reclaim
> > was removed quite some time ago, so the assumption doesn't hold
> > anymore. It would be more appropriate to check the compaction progress
> > instead, but this patch simply removes the check and relies solely
> > on the watermark check.
> > 
> > To prevent too many retries, the stall_backoff is not reset after
> > a reclaim which made progress because we cannot assume it helped the
> > high order situation.
> > 
> > Signed-off-by: Michal Hocko <mhocko@suse.com>
> > ---
> >  mm/page_alloc.c | 20 ++++++++------------
> >  1 file changed, 8 insertions(+), 12 deletions(-)
> > 
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 168a675e9116..45de14cd62f4 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -2998,7 +2998,6 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> >  	bool can_direct_reclaim = gfp_mask & __GFP_DIRECT_RECLAIM;
> >  	struct page *page = NULL;
> >  	int alloc_flags;
> > -	unsigned long pages_reclaimed = 0;
> >  	unsigned long did_some_progress;
> >  	enum migrate_mode migration_mode = MIGRATE_ASYNC;
> >  	bool deferred_compaction = false;
> > @@ -3167,24 +3166,21 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> > 
> >  	/*
> >  	 * Do not retry high order allocations unless they are __GFP_REPEAT
> > -	 * and even then do not retry endlessly unless explicitly told so
> > +	 * unless explicitly told so.
> 
> s/unless/or/

Fixed
 
> Acked-by: Hillf Danton <hillf.zj@alibaba-inc.com>

Thanks!

> 
> >  	 */
> > -	pages_reclaimed += did_some_progress;
> > -	if (order > PAGE_ALLOC_COSTLY_ORDER) {
> > -		if (!(gfp_mask & __GFP_NOFAIL) &&
> > -		   (!(gfp_mask & __GFP_REPEAT) || pages_reclaimed >= (1<<order)))
> > -			goto noretry;
> > -
> > -		if (did_some_progress)
> > -			goto retry;
> > -	}
> > +	if (order > PAGE_ALLOC_COSTLY_ORDER &&
> > +			!(gfp_mask & (__GFP_REPEAT|__GFP_NOFAIL)))
> > +		goto noretry;
> > 
> >  	/*
> >  	 * Be optimistic and consider all pages on reclaimable LRUs as usable
> >  	 * but make sure we converge to OOM if we cannot make any progress after
> >  	 * multiple consecutive failed attempts.
> > +	 * Costly __GFP_REPEAT allocations might have made a progress but this
> > +	 * doesn't mean their order will become available due to high fragmentation
> > +	 * so do not reset the backoff for them
> >  	 */
> > -	if (did_some_progress)
> > +	if (did_some_progress && order <= PAGE_ALLOC_COSTLY_ORDER)
> >  		stall_backoff = 0;
> >  	else
> >  		stall_backoff = min(stall_backoff+1, MAX_STALL_BACKOFF);
> > --
> > 2.6.2
> 

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [RFC 3/3] mm: use watermark checks for __GFP_REPEAT high order allocations
  2015-12-01 12:56 ` [RFC 3/3] mm: use watermark checks for __GFP_REPEAT high order allocations Michal Hocko
@ 2015-12-02  7:07   ` Hillf Danton
  2015-12-02  8:52     ` Michal Hocko
  0 siblings, 1 reply; 22+ messages in thread
From: Hillf Danton @ 2015-12-02  7:07 UTC (permalink / raw)
  To: 'Michal Hocko', linux-mm
  Cc: 'Andrew Morton', 'Linus Torvalds',
	'Mel Gorman', 'Johannes Weiner',
	'David Rientjes', 'Tetsuo Handa',
	'KAMEZAWA Hiroyuki', 'Michal Hocko'

> From: Michal Hocko <mhocko@suse.com>
> 
> __alloc_pages_slowpath retries costly allocations until at least
> 1<<order pages have been reclaimed or, if the reclaim hasn't made any
> progress, until the watermark check for at least one zone would
> succeed after reclaiming all reclaimable pages.
> 
> The first condition was added by a41f24ea9fd6 ("page allocator: smarter
> retry of costly-order allocations") and it assumed that lumpy reclaim
> could have created a page of the sufficient order. Lumpy reclaim
> was removed quite some time ago, so the assumption doesn't hold
> anymore. It would be more appropriate to check the compaction progress
> instead, but this patch simply removes the check and relies solely
> on the watermark check.
> 
> To prevent too many retries, the stall_backoff is not reset after
> a reclaim which made progress because we cannot assume it helped the
> high order situation.
> 
> Signed-off-by: Michal Hocko <mhocko@suse.com>
> ---
>  mm/page_alloc.c | 20 ++++++++------------
>  1 file changed, 8 insertions(+), 12 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 168a675e9116..45de14cd62f4 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2998,7 +2998,6 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  	bool can_direct_reclaim = gfp_mask & __GFP_DIRECT_RECLAIM;
>  	struct page *page = NULL;
>  	int alloc_flags;
> -	unsigned long pages_reclaimed = 0;
>  	unsigned long did_some_progress;
>  	enum migrate_mode migration_mode = MIGRATE_ASYNC;
>  	bool deferred_compaction = false;
> @@ -3167,24 +3166,21 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> 
>  	/*
>  	 * Do not retry high order allocations unless they are __GFP_REPEAT
> -	 * and even then do not retry endlessly unless explicitly told so
> +	 * unless explicitly told so.

s/unless/or/

Acked-by: Hillf Danton <hillf.zj@alibaba-inc.com>

>  	 */
> -	pages_reclaimed += did_some_progress;
> -	if (order > PAGE_ALLOC_COSTLY_ORDER) {
> -		if (!(gfp_mask & __GFP_NOFAIL) &&
> -		   (!(gfp_mask & __GFP_REPEAT) || pages_reclaimed >= (1<<order)))
> -			goto noretry;
> -
> -		if (did_some_progress)
> -			goto retry;
> -	}
> +	if (order > PAGE_ALLOC_COSTLY_ORDER &&
> +			!(gfp_mask & (__GFP_REPEAT|__GFP_NOFAIL)))
> +		goto noretry;
> 
>  	/*
>  	 * Be optimistic and consider all pages on reclaimable LRUs as usable
>  	 * but make sure we converge to OOM if we cannot make any progress after
>  	 * multiple consecutive failed attempts.
> +	 * Costly __GFP_REPEAT allocations might have made a progress but this
> +	 * doesn't mean their order will become available due to high fragmentation
> +	 * so do not reset the backoff for them
>  	 */
> -	if (did_some_progress)
> +	if (did_some_progress && order <= PAGE_ALLOC_COSTLY_ORDER)
>  		stall_backoff = 0;
>  	else
>  		stall_backoff = min(stall_backoff+1, MAX_STALL_BACKOFF);
> --
> 2.6.2

^ permalink raw reply	[flat|nested] 22+ messages in thread

* [RFC 3/3] mm: use watermark checks for __GFP_REPEAT high order allocations
  2015-12-01 12:56 [RFC 0/3] OOM detection rework v3 Michal Hocko
@ 2015-12-01 12:56 ` Michal Hocko
  2015-12-02  7:07   ` Hillf Danton
  0 siblings, 1 reply; 22+ messages in thread
From: Michal Hocko @ 2015-12-01 12:56 UTC (permalink / raw)
  To: linux-mm
  Cc: Andrew Morton, Linus Torvalds, Mel Gorman, Johannes Weiner,
	David Rientjes, Tetsuo Handa, Hillf Danton, KAMEZAWA Hiroyuki,
	Michal Hocko

From: Michal Hocko <mhocko@suse.com>

__alloc_pages_slowpath retries costly allocations until at least
1<<order pages have been reclaimed or, if the reclaim hasn't made any
progress, until the watermark check for at least one zone would
succeed after reclaiming all reclaimable pages.

The first condition was added by a41f24ea9fd6 ("page allocator: smarter
retry of costly-order allocations") and it assumed that lumpy reclaim
could have created a page of the sufficient order. Lumpy reclaim
was removed quite some time ago, so the assumption doesn't hold
anymore. It would be more appropriate to check the compaction progress
instead, but this patch simply removes the check and relies solely
on the watermark check.

To prevent too many retries, the stall_backoff is not reset after
a reclaim which made progress because we cannot assume it helped the
high order situation.

Signed-off-by: Michal Hocko <mhocko@suse.com>
---
 mm/page_alloc.c | 20 ++++++++------------
 1 file changed, 8 insertions(+), 12 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 168a675e9116..45de14cd62f4 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2998,7 +2998,6 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	bool can_direct_reclaim = gfp_mask & __GFP_DIRECT_RECLAIM;
 	struct page *page = NULL;
 	int alloc_flags;
-	unsigned long pages_reclaimed = 0;
 	unsigned long did_some_progress;
 	enum migrate_mode migration_mode = MIGRATE_ASYNC;
 	bool deferred_compaction = false;
@@ -3167,24 +3166,21 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 
 	/*
 	 * Do not retry high order allocations unless they are __GFP_REPEAT
-	 * and even then do not retry endlessly unless explicitly told so
+	 * unless explicitly told so.
 	 */
-	pages_reclaimed += did_some_progress;
-	if (order > PAGE_ALLOC_COSTLY_ORDER) {
-		if (!(gfp_mask & __GFP_NOFAIL) &&
-		   (!(gfp_mask & __GFP_REPEAT) || pages_reclaimed >= (1<<order)))
-			goto noretry;
-
-		if (did_some_progress)
-			goto retry;
-	}
+	if (order > PAGE_ALLOC_COSTLY_ORDER &&
+			!(gfp_mask & (__GFP_REPEAT|__GFP_NOFAIL)))
+		goto noretry;
 
 	/*
 	 * Be optimistic and consider all pages on reclaimable LRUs as usable
 	 * but make sure we converge to OOM if we cannot make any progress after
 	 * multiple consecutive failed attempts.
+	 * Costly __GFP_REPEAT allocations might have made a progress but this
+	 * doesn't mean their order will become available due to high fragmentation
+	 * so do not reset the backoff for them
 	 */
-	if (did_some_progress)
+	if (did_some_progress && order <= PAGE_ALLOC_COSTLY_ORDER)
 		stall_backoff = 0;
 	else
 		stall_backoff = min(stall_backoff+1, MAX_STALL_BACKOFF);
-- 
2.6.2

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [RFC 3/3] mm: use watermark checks for __GFP_REPEAT high order allocations
  2015-10-29 15:17 RFC: OOM detection rework v1 mhocko
@ 2015-10-29 15:17   ` mhocko
  0 siblings, 0 replies; 22+ messages in thread
From: mhocko @ 2015-10-29 15:17 UTC (permalink / raw)
  To: linux-mm
  Cc: Andrew Morton, Linus Torvalds, Mel Gorman, Johannes Weiner,
	Rik van Riel, David Rientjes, Tetsuo Handa, LKML, Michal Hocko

From: Michal Hocko <mhocko@suse.com>

__alloc_pages_slowpath retries costly allocations until at least
1<<order pages have been reclaimed or, if the reclaim hasn't made any
progress, until the watermark check for at least one zone would
succeed after reclaiming all reclaimable pages.

The first condition was added by a41f24ea9fd6 ("page allocator: smarter
retry of costly-order allocations") and it assumed that lumpy reclaim
could have created a page of the sufficient order. Lumpy reclaim
was removed quite some time ago, so the assumption doesn't hold
anymore. It would be more appropriate to check the compaction progress
instead, but this patch simply removes the check and relies solely
on the watermark check.

To prevent too many retries, the stall_backoff is not reset after
a reclaim which made progress because we cannot assume it helped the
high order situation.

Signed-off-by: Michal Hocko <mhocko@suse.com>
---
 mm/page_alloc.c | 21 +++++++--------------
 1 file changed, 7 insertions(+), 14 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0518ca6a9776..0dc1ca9b1219 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2986,7 +2986,6 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	bool can_direct_reclaim = gfp_mask & __GFP_DIRECT_RECLAIM;
 	struct page *page = NULL;
 	int alloc_flags;
-	unsigned long pages_reclaimed = 0;
 	unsigned long did_some_progress;
 	enum migrate_mode migration_mode = MIGRATE_ASYNC;
 	bool deferred_compaction = false;
@@ -3145,25 +3144,19 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	if (gfp_mask & __GFP_NORETRY)
 		goto noretry;
 
-	/*
-	 * Do not retry high order allocations unless they are __GFP_REPEAT
-	 * and even then do not retry endlessly.
-	 */
-	pages_reclaimed += did_some_progress;
-	if (order > PAGE_ALLOC_COSTLY_ORDER) {
-		if (!(gfp_mask & __GFP_REPEAT) || pages_reclaimed >= (1<<order))
-			goto noretry;
-
-		if (did_some_progress)
-			goto retry;
-	}
+	/* Do not retry high order allocations unless they are __GFP_REPEAT */
+	if (order > PAGE_ALLOC_COSTLY_ORDER && !(gfp_mask & __GFP_REPEAT))
+		goto noretry;
 
 	/*
 	 * Be optimistic and consider all pages on reclaimable LRUs as usable
 	 * but make sure we converge to OOM if we cannot make any progress after
 	 * multiple consecutive failed attempts.
+	 * Costly __GFP_REPEAT allocations might have made a progress but this
+	 * doesn't mean their order will become available due to high fragmentation
+	 * so do not reset the backoff for them
 	 */
-	if (did_some_progress)
+	if (did_some_progress && order <= PAGE_ALLOC_COSTLY_ORDER)
 		stall_backoff = 0;
 	else
 		stall_backoff = min(stall_backoff+1, MAX_STALL_BACKOFF);
-- 
2.6.1


^ permalink raw reply related	[flat|nested] 22+ messages in thread

end of thread, other threads:[~2015-12-02  8:52 UTC | newest]

Thread overview: 22+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-11-18 13:03 [RFC 0/3] OOM detection rework v2 Michal Hocko
2015-11-18 13:03 ` [RFC 1/3] mm, oom: refactor oom detection Michal Hocko
2015-11-19 23:01   ` David Rientjes
2015-11-20  9:06     ` Michal Hocko
2015-11-20 23:27       ` David Rientjes
2015-11-23  9:41         ` Michal Hocko
2015-11-23 18:24           ` Johannes Weiner
2015-11-24 10:03             ` Michal Hocko
2015-11-18 13:03 ` [RFC 2/3] mm: throttle on IO only when there are too many dirty and writeback pages Michal Hocko
2015-11-19 23:12   ` David Rientjes
2015-11-20  9:15     ` Michal Hocko
2015-11-18 13:04 ` [RFC 3/3] mm: use watermark checks for __GFP_REPEAT high order allocations Michal Hocko
2015-11-19 23:17   ` David Rientjes
2015-11-20  9:18     ` Michal Hocko
2015-11-20 23:33       ` David Rientjes
2015-11-23  9:46         ` Michal Hocko
2015-11-18 16:21 ` [RFC 0/3] OOM detection rework v2 Linus Torvalds
  -- strict thread matches above, loose matches on Subject: below --
2015-12-01 12:56 [RFC 0/3] OOM detection rework v3 Michal Hocko
2015-12-01 12:56 ` [RFC 3/3] mm: use watermark checks for __GFP_REPEAT high order allocations Michal Hocko
2015-12-02  7:07   ` Hillf Danton
2015-12-02  8:52     ` Michal Hocko
2015-10-29 15:17 RFC: OOM detection rework v1 mhocko
2015-10-29 15:17 ` [RFC 3/3] mm: use watermark checks for __GFP_REPEAT high order allocations mhocko

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.