* RFC: OOM detection rework v1
@ 2015-10-29 15:17 ` mhocko
  0 siblings, 0 replies; 50+ messages in thread
From: mhocko @ 2015-10-29 15:17 UTC (permalink / raw)
  To: linux-mm
  Cc: Andrew Morton, Linus Torvalds, Mel Gorman, Johannes Weiner,
	Rik van Riel, David Rientjes, Tetsuo Handa, LKML

Hi,
as pointed out by Linus [1][2], relying on zone_reclaimable as a way to
communicate reclaim progress is rather dubious. I tend to agree. Not
only is it really obscure, it is also not hard to imagine cases where a
single page freed in the loop keeps all the reclaimers looping without
making any progress, because their gfp_mask wouldn't allow them to get
that page anyway (e.g. a single GFP_ATOMIC alloc and free loop). This is
rare enough that it doesn't happen in practice, but the current logic is
obscure, hard to follow and also non-deterministic.

This is an attempt to make the OOM detection more deterministic and
easier to follow, because each reclaimer basically tracks its own
progress and that tracking is implemented at the page allocator layer
rather than spread out between the allocator and the reclaim path. More
on the implementation is described in the first patch.
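
To illustrate the backoff which the first patch implements (1/16 of the
reclaimable pages is dropped from the reclaim target for every round of
direct reclaim without progress), here is a small userspace model with
made-up numbers. It is only an illustration, not kernel code; it shows
how the watermark-check target converges to the free pages alone and
therefore to OOM:

#include <stdio.h>

#define MAX_STALL_BACKOFF 16

int main(void)
{
	long reclaimable = 100000;	/* made-up reclaimable LRU pages */
	long free = 2000;		/* made-up NR_FREE_PAGES */
	int stall_backoff;

	for (stall_backoff = 0; stall_backoff <= MAX_STALL_BACKOFF; stall_backoff++) {
		/* same arithmetic as the patch, clamped for the model */
		long target = reclaimable;

		target -= stall_backoff * (1 + target / MAX_STALL_BACKOFF);
		if (target < 0)
			target = 0;
		target += free;

		printf("round %2d: watermark check target = %ld pages\n",
		       stall_backoff, target);
	}
	return 0;
}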

I have tested several different scenarios but it should be clear that
testing the OOM killer in a representative way is quite hard. There is
usually only a tiny gap between almost OOM and full blown OOM, which is
often time sensitive. Anyway, I have tested the following 3 scenarios
and I would appreciate suggestions for more tests.

Testing environment: a virtual machine with 2G of RAM and 2 CPUs without
any swap to make the OOM behavior more deterministic.

1) 2 writers (each doing dd with 4M blocks to a 1G xfs partition,
   removing the files and starting over again) running in parallel for
   10s to build up a lot of dirty pages, then 100 parallel mem_eaters
   (anon private populated mmap which waits until it gets a signal),
   80M each. A minimal sketch of the mem_eater is shown after the
   results below.

   This causes an OOM flood of course and I have compared both patched
   and unpatched kernels. The test is considered finished once no
   further OOM conditions are detected. This should tell us whether
   there are any excessive or premature kills:

* base kernel
$ grep "Killed process" base-oom-run.log | tail -n1
[  836.589319] Killed process 3035 (mem_eater) total-vm:85852kB, anon-rss:81996kB, file-rss:344kB
$ grep "invoked oom-killer" base-oom-run.log | wc -l
78
$ grep "DMA32.*all_unreclaimable? no" base-oom-run.log | wc -l
0

* patched kernel
$ grep "Killed process" patched-oom-run.log | tail -n1
[  843.281009] Killed process 2998 (mem_eater) total-vm:85852kB, anon-rss:82000kB, file-rss:4kB
$ grep "invoked oom-killer" patched-oom-run.log | wc -l
77
$ grep "DMA32.*all_unreclaimable? no" patched-oom-run.log | wc -l
0

So both kernels finished in a comparable time and killed a very similar
number of processes, and there doesn't seem to be any case where the
patched kernel would have considered the DMA32 zone reclaimable.
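
For reference, the mem_eater is essentially the following (a minimal
sketch based on the description above; the real helper may differ in
details):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/mman.h>

int main(int argc, char **argv)
{
	/* size in bytes, 80M by default as used in the test above */
	size_t size = argc > 1 ? strtoul(argv[1], NULL, 0) : 80UL << 20;
	void *mem;

	/* anon private mapping, populated (faulted in) right away */
	mem = mmap(NULL, size, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS | MAP_POPULATE, -1, 0);
	if (mem == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	pause();	/* wait until it gets a signal */
	return 0;
}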

2) 2 writers again, running for 10s, and then 10 mem_eaters to consume
   as much memory as possible without triggering the OOM killer. This
   required a lot of tuning of the per-mem_eater size (see the snippets
   below) but I've considered 3 consecutive runs without OOM a success.

* base kernel
size=$(awk '/MemFree/{printf "%dK", ($2/10)-(14*1024)}' /proc/meminfo)

* patched kernel
size=$(awk '/MemFree/{printf "%dK", ($2/10)-(7500)}' /proc/meminfo)

So it seems that the patched kernel handled the low memory conditions
better and fired the OOM killer later: the safety margin subtracted from
each mem_eater's share could be roughly halved (from 14M down to ~7.3M)
without triggering the OOM killer.

3) Costly high-order allocations with a limited amount of memory.
   Start 10 mem_eaters in parallel, each with
   size=$(awk '/MemTotal/{printf "%d\n", $2/10}' /proc/meminfo)
   This will trigger the OOM killer, which kills one of them and thereby
   frees up roughly 200M. Then try to use all the remaining space for
   hugetlb pages and see how many of them can be allocated. Kill
   everything, wait 2s and try again. This tests whether we do not fail
   costly __GFP_REPEAT allocations too early now. A sketch of the
   hugetlb consumer follows the results below.
* base kernel
$ sort base-hugepages.log | uniq -c
      1 66
     19 67
     20 Trying to allocate 74

* patched kernel
$ sort patched-hugepages.log | uniq -c
      1 66
     19 67
     20 Trying to allocate 74

This also doesn't look bad, but this particular test is quite timing
sensitive.
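
The hugetlb consumer is basically the following (again only a sketch and
an assumption about the mechanism; the logs above simply record the
'Trying to allocate N' line and how many pages were actually obtained):

#include <stdio.h>

int main(void)
{
	unsigned long want = 74;	/* taken from the logs above */
	unsigned long have = 0;
	FILE *f;

	printf("Trying to allocate %lu\n", want);

	/* asking for hugetlb pages triggers costly __GFP_REPEAT allocations */
	f = fopen("/proc/sys/vm/nr_hugepages", "w");
	if (!f)
		return 1;
	fprintf(f, "%lu\n", want);
	fclose(f);

	/* read back how many pages the kernel actually managed to allocate */
	f = fopen("/proc/sys/vm/nr_hugepages", "r");
	if (!f)
		return 1;
	if (fscanf(f, "%lu", &have) != 1)
		have = 0;
	fclose(f);

	printf("%lu\n", have);		/* 66 or 67 in the runs above */
	return 0;
}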

The above results do seem optimistic, but more workloads should
obviously be tested. I would really appreciate feedback on the approach
I have chosen before I go into more tuning. Is this a viable way to go?

[1] http://lkml.kernel.org/r/CA+55aFwapaED7JV6zm-NVkP-jKie+eQ1vDXWrKD=SkbshZSgmw@mail.gmail.com
[2] http://lkml.kernel.org/r/CA+55aFxwg=vS2nrXsQhAUzPQDGb8aQpZi0M7UUh21ftBo-z46Q@mail.gmail.com

* [RFC 1/3] mm, oom: refactor oom detection
  2015-10-29 15:17 ` mhocko
@ 2015-10-29 15:17   ` mhocko
  -1 siblings, 0 replies; 50+ messages in thread
From: mhocko @ 2015-10-29 15:17 UTC (permalink / raw)
  To: linux-mm
  Cc: Andrew Morton, Linus Torvalds, Mel Gorman, Johannes Weiner,
	Rik van Riel, David Rientjes, Tetsuo Handa, LKML, Michal Hocko

From: Michal Hocko <mhocko@suse.com>

__alloc_pages_slowpath has traditionally relied on direct reclaim and
did_some_progress as an indicator that it makes sense to retry the
allocation rather than declare OOM. shrink_zones had to rely on
zone_reclaimable if shrink_zone didn't make any progress, in order to
prevent a premature OOM killer invocation - the LRU might be full of
dirty or writeback pages and direct reclaim cannot clean those up.

zone_reclaimable allows rescanning the reclaimable lists several times
and restarting if a page is freed. This is really subtle behavior and it
might lead to a livelock when a single freed page keeps the allocator
looping while the current task is not able to allocate that single page.
The OOM killer would be more appropriate than looping without any
progress for an unbounded amount of time.

This patch changes the OOM detection logic and pulls it out of
shrink_zone, which is too low a level for high level decisions such as
OOM, which is a per-zonelist property. It is __alloc_pages_slowpath
which knows how many attempts have been made and what the progress has
been so far, so it is the more appropriate place to implement this
logic.

The new heuristic tries to be more deterministic and easier to follow.
Retrying makes sense only if the currently reclaimable memory plus the
free pages would allow the current allocation request to succeed (as per
__zone_watermark_ok) for at least one zone in the usable zonelist.

This alone wouldn't be sufficient, though, because writeback might get
stuck and reclaimable pages might be pinned for a really long time or
even depend on the current allocation context. Therefore a feedback
mechanism is implemented which reduces the reclaim target after each
reclaim round without any progress. This means that we should eventually
converge to only NR_FREE_PAGES as the target, fail the watermark check
and proceed to OOM. The backoff is simple and linear, dropping 1/16 of
the reclaimable pages for each round without any progress. We are
optimistic and reset the counter after a successful reclaim round.

Costly high-order allocations mostly preserve their semantics: those
without __GFP_REPEAT fail right away, while those which have the flag
set back off once the amount of reclaimed pages reaches the equivalent
of the requested order. The only difference is that if there was no
progress during the reclaim we rely on the zone watermark check. This is
a more logical thing to do than the previous 1<<order attempts, which
were a result of zone_reclaimable faking the progress.

Signed-off-by: Michal Hocko <mhocko@suse.com>
---
 include/linux/swap.h |  1 +
 mm/page_alloc.c      | 69 ++++++++++++++++++++++++++++++++++++++++++++++------
 mm/vmscan.c          | 10 +-------
 3 files changed, 64 insertions(+), 16 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 9c7c4b418498..8298e1dc20f9 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -317,6 +317,7 @@ extern void lru_cache_add_active_or_unevictable(struct page *page,
 						struct vm_area_struct *vma);
 
 /* linux/mm/vmscan.c */
+extern unsigned long zone_reclaimable_pages(struct zone *zone);
 extern unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
 					gfp_t gfp_mask, nodemask_t *mask);
 extern int __isolate_lru_page(struct page *page, isolate_mode_t mode);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c73913648357..9c0abb75ad53 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2972,6 +2972,13 @@ static inline bool is_thp_gfp_mask(gfp_t gfp_mask)
 	return (gfp_mask & (GFP_TRANSHUGE | __GFP_KSWAPD_RECLAIM)) == GFP_TRANSHUGE;
 }
 
+/*
+ * Number of backoff steps for potentially reclaimable pages if the direct reclaim
+ * cannot make any progress. Each step will reduce 1/MAX_STALL_BACKOFF of the
+ * reclaimable memory.
+ */
+#define MAX_STALL_BACKOFF 16
+
 static inline struct page *
 __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 						struct alloc_context *ac)
@@ -2984,6 +2991,9 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	enum migrate_mode migration_mode = MIGRATE_ASYNC;
 	bool deferred_compaction = false;
 	int contended_compaction = COMPACT_CONTENDED_NONE;
+	struct zone *zone;
+	struct zoneref *z;
+	int stall_backoff = 0;
 
 	/*
 	 * In the slowpath, we sanity check order to avoid ever trying to
@@ -3135,13 +3145,56 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	if (gfp_mask & __GFP_NORETRY)
 		goto noretry;
 
-	/* Keep reclaiming pages as long as there is reasonable progress */
+	/*
+	 * Do not retry high order allocations unless they are __GFP_REPEAT
+	 * and even then do not retry endlessly.
+	 */
 	pages_reclaimed += did_some_progress;
-	if ((did_some_progress && order <= PAGE_ALLOC_COSTLY_ORDER) ||
-	    ((gfp_mask & __GFP_REPEAT) && pages_reclaimed < (1 << order))) {
-		/* Wait for some write requests to complete then retry */
-		wait_iff_congested(ac->preferred_zone, BLK_RW_ASYNC, HZ/50);
-		goto retry;
+	if (order > PAGE_ALLOC_COSTLY_ORDER) {
+		if (!(gfp_mask & __GFP_REPEAT) || pages_reclaimed >= (1<<order))
+			goto noretry;
+
+		if (did_some_progress)
+			goto retry;
+	}
+
+	/*
+	 * Be optimistic and consider all pages on reclaimable LRUs as usable
+	 * but make sure we converge to OOM if we cannot make any progress after
+	 * multiple consecutive failed attempts.
+	 */
+	if (did_some_progress)
+		stall_backoff = 0;
+	else
+		stall_backoff = min(stall_backoff+1, MAX_STALL_BACKOFF);
+
+	/*
+	 * Keep reclaiming pages while there is a chance this will lead somewhere.
+	 * If none of the target zones can satisfy our allocation request even
+	 * if all reclaimable pages are considered then we are screwed and have
+	 * to go OOM.
+	 */
+	for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx, ac->nodemask) {
+		unsigned long free = zone_page_state(zone, NR_FREE_PAGES);
+		unsigned long reclaimable;
+		unsigned long target;
+
+		reclaimable = zone_reclaimable_pages(zone) +
+			      zone_page_state(zone, NR_ISOLATED_FILE) +
+			      zone_page_state(zone, NR_ISOLATED_ANON);
+		target = reclaimable;
+		target -= stall_backoff * (1 + target/MAX_STALL_BACKOFF);
+		target += free;
+
+		/*
+		 * Would the allocation succeed if we reclaimed the whole target?
+		 */
+		if (__zone_watermark_ok(zone, order, min_wmark_pages(zone),
+				ac->high_zoneidx, alloc_flags, target)) {
+			/* Wait for some write requests to complete then retry */
+			wait_iff_congested(zone, BLK_RW_ASYNC, HZ/50);
+			goto retry;
+		}
 	}
 
 	/* Reclaim has failed us, start killing things */
@@ -3150,8 +3203,10 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 		goto got_pg;
 
 	/* Retry as long as the OOM killer is making progress */
-	if (did_some_progress)
+	if (did_some_progress) {
+		stall_backoff = 0;
 		goto retry;
+	}
 
 noretry:
 	/*
diff --git a/mm/vmscan.c b/mm/vmscan.c
index c88d74ad9304..bc14217acd47 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -193,7 +193,7 @@ static bool sane_reclaim(struct scan_control *sc)
 }
 #endif
 
-static unsigned long zone_reclaimable_pages(struct zone *zone)
+unsigned long zone_reclaimable_pages(struct zone *zone)
 {
 	unsigned long nr;
 
@@ -2639,10 +2639,6 @@ static bool shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
 
 		if (shrink_zone(zone, sc, zone_idx(zone) == classzone_idx))
 			reclaimable = true;
-
-		if (global_reclaim(sc) &&
-		    !reclaimable && zone_reclaimable(zone))
-			reclaimable = true;
 	}
 
 	/*
@@ -2734,10 +2730,6 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
 		goto retry;
 	}
 
-	/* Any of the zones still reclaimable?  Don't OOM. */
-	if (zones_reclaimable)
-		return 1;
-
 	return 0;
 }
 
-- 
2.6.1


* [RFC 2/3] mm: throttle on IO only when there are too many dirty and writeback pages
  2015-10-29 15:17 ` mhocko
@ 2015-10-29 15:17   ` mhocko
  -1 siblings, 0 replies; 50+ messages in thread
From: mhocko @ 2015-10-29 15:17 UTC (permalink / raw)
  To: linux-mm
  Cc: Andrew Morton, Linus Torvalds, Mel Gorman, Johannes Weiner,
	Rik van Riel, David Rientjes, Tetsuo Handa, LKML, Michal Hocko

From: Michal Hocko <mhocko@suse.com>

wait_iff_congested has been used to throttle the allocator before it
retries another round of direct reclaim, to allow the writeback to make
some progress and to prevent reclaim from looping over dirty/writeback
pages without making any progress. We used to do congestion_wait before
0e093d99763e ("writeback: do not sleep on the congestion queue if
there are no congested BDIs or if significant congestion is not being
encountered in the current zone") but that led to undesirable stalls
and sleeping for the full timeout even when the BDI wasn't congested.
Hence wait_iff_congested was used instead. But it seems that even
wait_iff_congested doesn't work as expected. We might have a small file
LRU list with all pages dirty/writeback and yet the bdi is not
congested, so this ends up being just a cond_resched and can trigger a
premature OOM.

This patch replaces the unconditional wait_iff_congested with
congestion_wait, which is executed only if we _know_ that the last round
of direct reclaim didn't make any progress and dirty+writeback pages
make up more than half of the reclaimable pages in the zone which might
be usable for our target allocation. This shouldn't reintroduce the
stalls fixed by 0e093d99763e because congestion_wait is called only when
we are getting hopeless and sleeping is a better choice than OOMing with
many pages under IO.

Signed-off-by: Michal Hocko <mhocko@suse.com>
---
 mm/page_alloc.c | 19 +++++++++++++++++--
 1 file changed, 17 insertions(+), 2 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 9c0abb75ad53..0518ca6a9776 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3191,8 +3191,23 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 		 */
 		if (__zone_watermark_ok(zone, order, min_wmark_pages(zone),
 				ac->high_zoneidx, alloc_flags, target)) {
-			/* Wait for some write requests to complete then retry */
-			wait_iff_congested(zone, BLK_RW_ASYNC, HZ/50);
+			unsigned long writeback = zone_page_state(zone, NR_WRITEBACK),
+				      dirty = zone_page_state(zone, NR_FILE_DIRTY);
+
+			if (did_some_progress)
+				goto retry;
+
+			/*
+			 * If we didn't make any progress and have a lot of
+			 * dirty + writeback pages then we should wait for
+			 * an IO to complete to slow down the reclaim and
+			 * prevent from pre mature OOM
+			 */
+			if (2*(writeback + dirty) > reclaimable)
+				congestion_wait(BLK_RW_ASYNC, HZ/10);
+			else
+				cond_resched();
+
 			goto retry;
 		}
 	}
-- 
2.6.1


* [RFC 3/3] mm: use watermark checks for __GFP_REPEAT high order allocations
  2015-10-29 15:17 ` mhocko
@ 2015-10-29 15:17   ` mhocko
  -1 siblings, 0 replies; 50+ messages in thread
From: mhocko @ 2015-10-29 15:17 UTC (permalink / raw)
  To: linux-mm
  Cc: Andrew Morton, Linus Torvalds, Mel Gorman, Johannes Weiner,
	Rik van Riel, David Rientjes, Tetsuo Handa, LKML, Michal Hocko

From: Michal Hocko <mhocko@suse.com>

__alloc_pages_slowpath retries costly allocations as long as less than
1<<order worth of pages have been reclaimed and, when the reclaim hasn't
made any progress, only as long as the watermark check for at least one
zone would still succeed if all the remaining reclaimable pages were
reclaimed.

The first condition was added by a41f24ea9fd6 ("page allocator: smarter
retry of costly-order allocations") and it assumed that lumpy reclaim
could have created a page of a sufficient order. Lumpy reclaim has been
removed quite some time ago, so the assumption doesn't hold anymore. It
would be more appropriate to check the compaction progress instead, but
this patch simply removes the check and relies solely on the watermark
check.

To prevent too many retries, stall_backoff is not reset after a reclaim
round which made progress, because we cannot assume that the progress
helped the high order situation.

Signed-off-by: Michal Hocko <mhocko@suse.com>
---
 mm/page_alloc.c | 21 +++++++--------------
 1 file changed, 7 insertions(+), 14 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0518ca6a9776..0dc1ca9b1219 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2986,7 +2986,6 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	bool can_direct_reclaim = gfp_mask & __GFP_DIRECT_RECLAIM;
 	struct page *page = NULL;
 	int alloc_flags;
-	unsigned long pages_reclaimed = 0;
 	unsigned long did_some_progress;
 	enum migrate_mode migration_mode = MIGRATE_ASYNC;
 	bool deferred_compaction = false;
@@ -3145,25 +3144,19 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	if (gfp_mask & __GFP_NORETRY)
 		goto noretry;
 
-	/*
-	 * Do not retry high order allocations unless they are __GFP_REPEAT
-	 * and even then do not retry endlessly.
-	 */
-	pages_reclaimed += did_some_progress;
-	if (order > PAGE_ALLOC_COSTLY_ORDER) {
-		if (!(gfp_mask & __GFP_REPEAT) || pages_reclaimed >= (1<<order))
-			goto noretry;
-
-		if (did_some_progress)
-			goto retry;
-	}
+	/* Do not retry high order allocations unless they are __GFP_REPEAT */
+	if (order > PAGE_ALLOC_COSTLY_ORDER && !(gfp_mask & __GFP_REPEAT))
+		goto noretry;
 
 	/*
 	 * Be optimistic and consider all pages on reclaimable LRUs as usable
 	 * but make sure we converge to OOM if we cannot make any progress after
 	 * multiple consecutive failed attempts.
+	 * Costly __GFP_REPEAT allocations might have made a progress but this
+	 * doesn't mean their order will become available due to high fragmentation
+	 * so do not reset the backoff for them
 	 */
-	if (did_some_progress)
+	if (did_some_progress && order <= PAGE_ALLOC_COSTLY_ORDER)
 		stall_backoff = 0;
 	else
 		stall_backoff = min(stall_backoff+1, MAX_STALL_BACKOFF);
-- 
2.6.1


* Re: [RFC 1/3] mm, oom: refactor oom detection
  2015-10-29 15:17   ` mhocko
@ 2015-10-30  4:10     ` Hillf Danton
  -1 siblings, 0 replies; 50+ messages in thread
From: Hillf Danton @ 2015-10-30  4:10 UTC (permalink / raw)
  To: mhocko, linux-mm
  Cc: 'Andrew Morton', 'Linus Torvalds',
	'Mel Gorman', 'Johannes Weiner',
	'Rik van Riel', 'David Rientjes',
	'Tetsuo Handa', 'LKML', 'Michal Hocko'

> +/*
> + * Number of backoff steps for potentially reclaimable pages if the direct reclaim
> + * cannot make any progress. Each step will reduce 1/MAX_STALL_BACKOFF of the
> + * reclaimable memory.
> + */
> +#define MAX_STALL_BACKOFF 16
> +
>  static inline struct page *
>  __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  						struct alloc_context *ac)
> @@ -2984,6 +2991,9 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  	enum migrate_mode migration_mode = MIGRATE_ASYNC;
>  	bool deferred_compaction = false;
>  	int contended_compaction = COMPACT_CONTENDED_NONE;
> +	struct zone *zone;
> +	struct zoneref *z;
> +	int stall_backoff = 0;
> 
>  	/*
>  	 * In the slowpath, we sanity check order to avoid ever trying to
> @@ -3135,13 +3145,56 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  	if (gfp_mask & __GFP_NORETRY)
>  		goto noretry;
> 
> -	/* Keep reclaiming pages as long as there is reasonable progress */
> +	/*
> +	 * Do not retry high order allocations unless they are __GFP_REPEAT
> +	 * and even then do not retry endlessly.
> +	 */
>  	pages_reclaimed += did_some_progress;
> -	if ((did_some_progress && order <= PAGE_ALLOC_COSTLY_ORDER) ||
> -	    ((gfp_mask & __GFP_REPEAT) && pages_reclaimed < (1 << order))) {
> -		/* Wait for some write requests to complete then retry */
> -		wait_iff_congested(ac->preferred_zone, BLK_RW_ASYNC, HZ/50);
> -		goto retry;
> +	if (order > PAGE_ALLOC_COSTLY_ORDER) {
> +		if (!(gfp_mask & __GFP_REPEAT) || pages_reclaimed >= (1<<order))
> +			goto noretry;
> +
> +		if (did_some_progress)
> +			goto retry;
> +	}
> +
> +	/*
> +	 * Be optimistic and consider all pages on reclaimable LRUs as usable
> +	 * but make sure we converge to OOM if we cannot make any progress after
> +	 * multiple consecutive failed attempts.
> +	 */
> +	if (did_some_progress)
> +		stall_backoff = 0;
> +	else
> +		stall_backoff = min(stall_backoff+1, MAX_STALL_BACKOFF);
> +
> +	/*
> +	 * Keep reclaiming pages while there is a chance this will lead somewhere.
> +	 * If none of the target zones can satisfy our allocation request even
> +	 * if all reclaimable pages are considered then we are screwed and have
> +	 * to go OOM.
> +	 */
> +	for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx, ac->nodemask) {
> +		unsigned long free = zone_page_state(zone, NR_FREE_PAGES);
> +		unsigned long reclaimable;
> +		unsigned long target;
> +
> +		reclaimable = zone_reclaimable_pages(zone) +
> +			      zone_page_state(zone, NR_ISOLATED_FILE) +
> +			      zone_page_state(zone, NR_ISOLATED_ANON);
> +		target = reclaimable;
> +		target -= stall_backoff * (1 + target/MAX_STALL_BACKOFF);

		target = reclaimable - stall_backoff * (1 + target/MAX_STALL_BACKOFF)
		       = reclaimable - stall_backoff - stall_backoff * (target/MAX_STALL_BACKOFF)

then the first stall_backoff looks unreasonable.
I guess you mean

		target = reclaimable - target * (stall_backoff/MAX_STALL_BACKOFF)
		       = reclaimable - stall_backoff * (target/MAX_STALL_BACKOFF)
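
For concreteness (numbers made up): with target = 1600 and
stall_backoff = 4 the line above computes 1600 - 4 * (1 + 1600/16) =
1196, whereas dropping stall_backoff/MAX_STALL_BACKOFF of the target
would give 1600 - 4 * (1600/16) = 1200, i.e. the two differ by exactly
stall_backoff pages.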

> +		target += free;
> +
> +		/*
> +		 * Would the allocation succeed if we reclaimed the whole target?
> +		 */
> +		if (__zone_watermark_ok(zone, order, min_wmark_pages(zone),
> +				ac->high_zoneidx, alloc_flags, target)) {
> +			/* Wait for some write requests to complete then retry */
> +			wait_iff_congested(zone, BLK_RW_ASYNC, HZ/50);
> +			goto retry;
> +		}
>  	}
> 
[...]
/*
> @@ -2734,10 +2730,6 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
>  		goto retry;
>  	}
> 
> -	/* Any of the zones still reclaimable?  Don't OOM. */
> -	if (zones_reclaimable)
> -		return 1;
> -

Looks like the cleanup of zones_reclaimable is left out.
>  	return 0;
>  }
> 
> --
> 2.6.1


* Re: [RFC 2/3] mm: throttle on IO only when there are too many dirty and writeback pages
  2015-10-29 15:17   ` mhocko
@ 2015-10-30  4:18     ` Hillf Danton
  -1 siblings, 0 replies; 50+ messages in thread
From: Hillf Danton @ 2015-10-30  4:18 UTC (permalink / raw)
  To: mhocko, linux-mm
  Cc: 'Andrew Morton', 'Linus Torvalds',
	'Mel Gorman', 'Johannes Weiner',
	'Rik van Riel', 'David Rientjes',
	'Tetsuo Handa', 'LKML', 'Michal Hocko',
	Christoph Lameter


> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -3191,8 +3191,23 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  		 */
>  		if (__zone_watermark_ok(zone, order, min_wmark_pages(zone),
>  				ac->high_zoneidx, alloc_flags, target)) {
> -			/* Wait for some write requests to complete then retry */
> -			wait_iff_congested(zone, BLK_RW_ASYNC, HZ/50);
> +			unsigned long writeback = zone_page_state(zone, NR_WRITEBACK),
> +				      dirty = zone_page_state(zone, NR_FILE_DIRTY);
> +
> +			if (did_some_progress)
> +				goto retry;
> +
> +			/*
> +			 * If we didn't make any progress and have a lot of
> +			 * dirty + writeback pages then we should wait for
> +			 * an IO to complete to slow down the reclaim and
> +			 * prevent from pre mature OOM
> +			 */
> +			if (2*(writeback + dirty) > reclaimable)
> +				congestion_wait(BLK_RW_ASYNC, HZ/10);
> +			else
> +				cond_resched();
> +

Looks like the vmstat updater issue is not addressed.
>  			goto retry;
>  		}
>  	}
> --
> 2.6.1


* Re: [RFC 1/3] mm, oom: refactor oom detection
  2015-10-29 15:17   ` mhocko
@ 2015-10-30  5:23     ` Kamezawa Hiroyuki
  -1 siblings, 0 replies; 50+ messages in thread
From: Kamezawa Hiroyuki @ 2015-10-30  5:23 UTC (permalink / raw)
  To: mhocko, linux-mm
  Cc: Andrew Morton, Linus Torvalds, Mel Gorman, Johannes Weiner,
	Rik van Riel, David Rientjes, Tetsuo Handa, LKML, Michal Hocko

On 2015/10/30 0:17, mhocko@kernel.org wrote:
> From: Michal Hocko <mhocko@suse.com>
> 
> __alloc_pages_slowpath has traditionally relied on the direct reclaim
> and did_some_progress as an indicator that it makes sense to retry
> allocation rather than declaring OOM. shrink_zones had to rely on
> zone_reclaimable if shrink_zone didn't make any progress to prevent
> from pre mature OOM killer invocation - the LRU might be full of dirty
> or writeback pages and direct reclaim cannot clean those up.
> 
> zone_reclaimable will allow to rescan the reclaimable lists several
> times and restart if a page is freed. This is really subtle behavior
> and it might lead to a livelock when a single freed page keeps allocator
> looping but the current task will not be able to allocate that single
> page. OOM killer would be more appropriate than looping without any
> progress for unbounded amount of time.
> 
> This patch changes OOM detection logic and pulls it out from shrink_zone
> which is too low to be appropriate for any high level decisions such as OOM
> which is per zonelist property. It is __alloc_pages_slowpath which knows
> how many attempts have been done and what was the progress so far
> therefore it is more appropriate to implement this logic.
> 
> The new heuristic tries to be more deterministic and easier to follow.
> Retrying makes sense only if the currently reclaimable memory + free
> pages would allow the current allocation request to succeed (as per
> __zone_watermark_ok) at least for one zone in the usable zonelist.
> 
> This alone wouldn't be sufficient, though, because the writeback might
> get stuck and reclaimable pages might be pinned for a really long time
> or even depend on the current allocation context. Therefore there is a
> feedback mechanism implemented which reduces the reclaim target after
> each reclaim round without any progress. This means that we should
> eventually converge to only NR_FREE_PAGES as the target and fail on the
> wmark check and proceed to OOM. The backoff is simple and linear with
> 1/16 of the reclaimable pages for each round without any progress. We
> are optimistic and reset counter for successful reclaim rounds.
> 
> Costly high order pages mostly preserve their semantic and those without
> __GFP_REPEAT fail right away while those which have the flag set will
> back off after the amount of reclaimable pages reaches equivalent of the
> requested order. The only difference is that if there was no progress
> during the reclaim we rely on zone watermark check. This is more logical
> thing to do than previous 1<<order attempts which were a result of
> zone_reclaimable faking the progress.
> 
> Signed-off-by: Michal Hocko <mhocko@suse.com>
> ---
>   include/linux/swap.h |  1 +
>   mm/page_alloc.c      | 69 ++++++++++++++++++++++++++++++++++++++++++++++------
>   mm/vmscan.c          | 10 +-------
>   3 files changed, 64 insertions(+), 16 deletions(-)
> 
> diff --git a/include/linux/swap.h b/include/linux/swap.h
> index 9c7c4b418498..8298e1dc20f9 100644
> --- a/include/linux/swap.h
> +++ b/include/linux/swap.h
> @@ -317,6 +317,7 @@ extern void lru_cache_add_active_or_unevictable(struct page *page,
>   						struct vm_area_struct *vma);
>   
>   /* linux/mm/vmscan.c */
> +extern unsigned long zone_reclaimable_pages(struct zone *zone);
>   extern unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
>   					gfp_t gfp_mask, nodemask_t *mask);
>   extern int __isolate_lru_page(struct page *page, isolate_mode_t mode);
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index c73913648357..9c0abb75ad53 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2972,6 +2972,13 @@ static inline bool is_thp_gfp_mask(gfp_t gfp_mask)
>   	return (gfp_mask & (GFP_TRANSHUGE | __GFP_KSWAPD_RECLAIM)) == GFP_TRANSHUGE;
>   }
>   
> +/*
> + * Number of backoff steps for potentially reclaimable pages if the direct reclaim
> + * cannot make any progress. Each step will reduce 1/MAX_STALL_BACKOFF of the
> + * reclaimable memory.
> + */
> +#define MAX_STALL_BACKOFF 16
> +
>   static inline struct page *
>   __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>   						struct alloc_context *ac)
> @@ -2984,6 +2991,9 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>   	enum migrate_mode migration_mode = MIGRATE_ASYNC;
>   	bool deferred_compaction = false;
>   	int contended_compaction = COMPACT_CONTENDED_NONE;
> +	struct zone *zone;
> +	struct zoneref *z;
> +	int stall_backoff = 0;
>   
>   	/*
>   	 * In the slowpath, we sanity check order to avoid ever trying to
> @@ -3135,13 +3145,56 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>   	if (gfp_mask & __GFP_NORETRY)
>   		goto noretry;
>   
> -	/* Keep reclaiming pages as long as there is reasonable progress */
> +	/*
> +	 * Do not retry high order allocations unless they are __GFP_REPEAT
> +	 * and even then do not retry endlessly.
> +	 */
>   	pages_reclaimed += did_some_progress;
> -	if ((did_some_progress && order <= PAGE_ALLOC_COSTLY_ORDER) ||
> -	    ((gfp_mask & __GFP_REPEAT) && pages_reclaimed < (1 << order))) {
> -		/* Wait for some write requests to complete then retry */
> -		wait_iff_congested(ac->preferred_zone, BLK_RW_ASYNC, HZ/50);
> -		goto retry;
> +	if (order > PAGE_ALLOC_COSTLY_ORDER) {
> +		if (!(gfp_mask & __GFP_REPEAT) || pages_reclaimed >= (1<<order))
> +			goto noretry;
> +
> +		if (did_some_progress)
> +			goto retry;

Why directly retry here?


> +	}
> +
> +	/*
> +	 * Be optimistic and consider all pages on reclaimable LRUs as usable
> +	 * but make sure we converge to OOM if we cannot make any progress after
> +	 * multiple consecutive failed attempts.
> +	 */
> +	if (did_some_progress)
> +		stall_backoff = 0;
> +	else
> +		stall_backoff = min(stall_backoff+1, MAX_STALL_BACKOFF);
> +
> +	/*
> +	 * Keep reclaiming pages while there is a chance this will lead somewhere.
> +	 * If none of the target zones can satisfy our allocation request even
> +	 * if all reclaimable pages are considered then we are screwed and have
> +	 * to go OOM.
> +	 */
> +	for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx, ac->nodemask) {
> +		unsigned long free = zone_page_state(zone, NR_FREE_PAGES);
> +		unsigned long reclaimable;
> +		unsigned long target;
> +
> +		reclaimable = zone_reclaimable_pages(zone) +
> +			      zone_page_state(zone, NR_ISOLATED_FILE) +
> +			      zone_page_state(zone, NR_ISOLATED_ANON);
> +		target = reclaimable;
> +		target -= stall_backoff * (1 + target/MAX_STALL_BACKOFF);
> +		target += free;
> +
> +		/*
> +		 * Would the allocation succeed if we reclaimed the whole target?
> +		 */
> +		if (__zone_watermark_ok(zone, order, min_wmark_pages(zone),
> +				ac->high_zoneidx, alloc_flags, target)) {
> +			/* Wait for some write requests to complete then retry */
> +			wait_iff_congested(zone, BLK_RW_ASYNC, HZ/50);
> +			goto retry;
> +		}
>   	}
>   
>   	/* Reclaim has failed us, start killing things */
> @@ -3150,8 +3203,10 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>   		goto got_pg;
>   
>   	/* Retry as long as the OOM killer is making progress */
> -	if (did_some_progress)
> +	if (did_some_progress) {
> +		stall_backoff = 0;
>   		goto retry;
> +	}

Umm ? I'm sorry that I didn't notice page allocation may fail even if order < PAGE_ALLOC_COSTLY_ORDER.
I thought old logic ignores did_some_progress. It seems a big change.

So, now, 0-order page allocation may fail in a OOM situation ?

Thanks,
-Kame







^ permalink raw reply	[flat|nested] 50+ messages in thread
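
For illustration, here is a minimal userspace model of the retry-vs-OOM
decision implemented by the patch quoted above. MAX_STALL_BACKOFF and the
backoff formula are taken from the patch; the single zone, the simplified
watermark check and all numbers are assumptions made up for this sketch,
and direct reclaim is assumed to keep failing:

#include <stdio.h>
#include <stdbool.h>

#define MAX_STALL_BACKOFF 16    /* value taken from the patch */

/* toy stand-in for __zone_watermark_ok(): the real check is more involved */
static bool watermark_ok(unsigned long usable, unsigned long request,
                         unsigned long min_wmark)
{
        return usable >= request + min_wmark;
}

int main(void)
{
        unsigned long reclaimable = 2000;       /* made-up zone contents */
        unsigned long free_pages = 30;          /* made up */
        unsigned long min_wmark = 50;           /* made up */
        unsigned long request = 8;              /* made up */
        int stall_backoff = 0;

        for (;;) {
                bool did_some_progress = false; /* assume reclaim keeps failing */
                unsigned long target = reclaimable;

                /* both branches mirror the patch */
                if (did_some_progress)
                        stall_backoff = 0;
                else if (stall_backoff < MAX_STALL_BACKOFF)
                        stall_backoff++;

                /* backoff formula from the patch */
                target -= stall_backoff * (1 + target / MAX_STALL_BACKOFF);
                target += free_pages;

                if (!watermark_ok(target, request, min_wmark)) {
                        printf("stall_backoff %2d: target %4lu -> give up, go OOM\n",
                               stall_backoff, target);
                        break;
                }
                printf("stall_backoff %2d: target %4lu -> retry reclaim\n",
                       stall_backoff, target);
        }
        return 0;
}

With these numbers the sketch retries 15 times and gives up once
stall_backoff reaches MAX_STALL_BACKOFF, i.e. the heuristic converges to
an OOM decision instead of looping forever on a zone it cannot reclaim.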

* Re: [RFC 2/3] mm: throttle on IO only when there are too many dirty and writeback pages
  2015-10-29 15:17   ` mhocko
@ 2015-10-30  5:48     ` Kamezawa Hiroyuki
  -1 siblings, 0 replies; 50+ messages in thread
From: Kamezawa Hiroyuki @ 2015-10-30  5:48 UTC (permalink / raw)
  To: mhocko, linux-mm
  Cc: Andrew Morton, Linus Torvalds, Mel Gorman, Johannes Weiner,
	Rik van Riel, David Rientjes, Tetsuo Handa, LKML, Michal Hocko

On 2015/10/30 0:17, mhocko@kernel.org wrote:
> From: Michal Hocko <mhocko@suse.com>
> 
> wait_iff_congested has been used to throttle allocator before it retried
> another round of direct reclaim to allow the writeback to make some
> progress and prevent reclaim from looping over dirty/writeback pages
> without making any progress. We used to do congestion_wait before
> 0e093d99763e ("writeback: do not sleep on the congestion queue if
> there are no congested BDIs or if significant congestion is not being
> encountered in the current zone") but that led to undesirable stalls
> and sleeping for the full timeout even when the BDI wasn't congested.
> Hence wait_iff_congested was used instead. But it seems that even
> wait_iff_congested doesn't work as expected. We might have a small file
> LRU list with all pages dirty/writeback and yet the bdi is not congested
> so this is just a cond_resched in the end and can end up triggering pre
> mature OOM.
> 
> This patch replaces the unconditional wait_iff_congested by
> congestion_wait which is executed only if we _know_ that the last round
> of direct reclaim didn't make any progress and dirty+writeback pages are
> more than a half of the reclaimable pages on the zone which might be
> usable for our target allocation. This shouldn't reintroduce stalls
> fixed by 0e093d99763e because congestion_wait is called only when we
> are getting hopeless and sleeping is a better choice than OOM with many
> pages under IO.
> 
> Signed-off-by: Michal Hocko <mhocko@suse.com>
> ---
>   mm/page_alloc.c | 19 +++++++++++++++++--
>   1 file changed, 17 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 9c0abb75ad53..0518ca6a9776 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -3191,8 +3191,23 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>   		 */
>   		if (__zone_watermark_ok(zone, order, min_wmark_pages(zone),
>   				ac->high_zoneidx, alloc_flags, target)) {
> -			/* Wait for some write requests to complete then retry */
> -			wait_iff_congested(zone, BLK_RW_ASYNC, HZ/50);
> +			unsigned long writeback = zone_page_state(zone, NR_WRITEBACK),
> +				      dirty = zone_page_state(zone, NR_FILE_DIRTY);
> +
> +			if (did_some_progress)
> +				goto retry;
> +
> +			/*
> +			 * If we didn't make any progress and have a lot of
> +			 * dirty + writeback pages then we should wait for
> +			 * an IO to complete to slow down the reclaim and
> +			 * prevent from pre mature OOM
> +			 */
> +			if (2*(writeback + dirty) > reclaimable)

Doesn't this add unnecessary latency if other zones have enough clean memory ?


Thanks,
-Kame
 



^ permalink raw reply	[flat|nested] 50+ messages in thread
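
For illustration, a minimal userspace sketch of the throttling condition
from the patch quoted above. The helper name and the numbers are made up;
only the 2*(writeback + dirty) > reclaimable test itself comes from the
patch:

#include <stdio.h>
#include <stdbool.h>

static bool too_many_pages_under_io(unsigned long dirty,
                                    unsigned long writeback,
                                    unsigned long reclaimable)
{
        /* sleep in congestion_wait() only when more than half of the
         * reclaimable pages of the zone are dirty or under writeback,
         * otherwise just cond_resched() and retry */
        return 2 * (writeback + dirty) > reclaimable;
}

int main(void)
{
        /* made-up numbers */
        printf("%d\n", too_many_pages_under_io(300, 250, 1000)); /* 1: wait */
        printf("%d\n", too_many_pages_under_io(100, 100, 1000)); /* 0: resched */
        return 0;
}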

* Re: [RFC 1/3] mm, oom: refactor oom detection
  2015-10-30  5:23     ` Kamezawa Hiroyuki
@ 2015-10-30  8:23       ` Michal Hocko
  -1 siblings, 0 replies; 50+ messages in thread
From: Michal Hocko @ 2015-10-30  8:23 UTC (permalink / raw)
  To: Kamezawa Hiroyuki
  Cc: linux-mm, Andrew Morton, Linus Torvalds, Mel Gorman,
	Johannes Weiner, Rik van Riel, David Rientjes, Tetsuo Handa,
	LKML

On Fri 30-10-15 14:23:59, KAMEZAWA Hiroyuki wrote:
> On 2015/10/30 0:17, mhocko@kernel.org wrote:
[...]
> > @@ -3135,13 +3145,56 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> >   	if (gfp_mask & __GFP_NORETRY)
> >   		goto noretry;
> >   
> > -	/* Keep reclaiming pages as long as there is reasonable progress */
> > +	/*
> > +	 * Do not retry high order allocations unless they are __GFP_REPEAT
> > +	 * and even then do not retry endlessly.
> > +	 */
> >   	pages_reclaimed += did_some_progress;
> > -	if ((did_some_progress && order <= PAGE_ALLOC_COSTLY_ORDER) ||
> > -	    ((gfp_mask & __GFP_REPEAT) && pages_reclaimed < (1 << order))) {
> > -		/* Wait for some write requests to complete then retry */
> > -		wait_iff_congested(ac->preferred_zone, BLK_RW_ASYNC, HZ/50);
> > -		goto retry;
> > +	if (order > PAGE_ALLOC_COSTLY_ORDER) {
> > +		if (!(gfp_mask & __GFP_REPEAT) || pages_reclaimed >= (1<<order))
> > +			goto noretry;
> > +
> > +		if (did_some_progress)
> > +			goto retry;
> 
> why directly retry here ?

Because I wanted to preserve the previous logic for GFP_REPEAT as much
as possible here and do an incremental change in the later patch.

[...]

> > @@ -3150,8 +3203,10 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> >   		goto got_pg;
> >   
> >   	/* Retry as long as the OOM killer is making progress */
> > -	if (did_some_progress)
> > +	if (did_some_progress) {
> > +		stall_backoff = 0;
> >   		goto retry;
> > +	}
> 
> Umm ? I'm sorry that I didn't notice page allocation may fail even
> if order < PAGE_ALLOC_COSTLY_ORDER.  I thought old logic ignores
> did_some_progress. It seems a big change.

__alloc_pages_may_oom will set did_some_progress

> So, now, 0-order page allocation may fail in a OOM situation ?

No they don't normally and this patch doesn't change the logic here.
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC 1/3] mm, oom: refactor oom detection
  2015-10-30  4:10     ` Hillf Danton
@ 2015-10-30  8:36       ` Michal Hocko
  -1 siblings, 0 replies; 50+ messages in thread
From: Michal Hocko @ 2015-10-30  8:36 UTC (permalink / raw)
  To: Hillf Danton
  Cc: linux-mm, 'Andrew Morton', 'Linus Torvalds',
	'Mel Gorman', 'Johannes Weiner',
	'Rik van Riel', 'David Rientjes',
	'Tetsuo Handa', 'LKML'

On Fri 30-10-15 12:10:15, Hillf Danton wrote:
[...]
> > +	for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx, ac->nodemask) {
> > +		unsigned long free = zone_page_state(zone, NR_FREE_PAGES);
> > +		unsigned long reclaimable;
> > +		unsigned long target;
> > +
> > +		reclaimable = zone_reclaimable_pages(zone) +
> > +			      zone_page_state(zone, NR_ISOLATED_FILE) +
> > +			      zone_page_state(zone, NR_ISOLATED_ANON);
> > +		target = reclaimable;
> > +		target -= stall_backoff * (1 + target/MAX_STALL_BACKOFF);
> 
> 		target = reclaimable - stall_backoff * (1 + target/MAX_STALL_BACKOFF);
> 		             = reclaimable - stall_backoff - stall_backoff  * (target/MAX_STALL_BACKOFF);
> 
> then the first stall_backoff looks unreasonable.

First stall_backoff is off by 1 but that shouldn't make any difference.

> I guess you mean
> 		target	= reclaimable - target * (stall_backoff/MAX_STALL_BACKOFF);
> 			= reclaimable - stall_back * (target/MAX_STALL_BACKOFF);

No the reason I used the bias is to converge for MAX_STALL_BACKOFF. If
you have target which is not divisible by MAX_STALL_BACKOFF then the
rounding would get target > 0 and so we wouldn't converge. With the +1
you underflow which is MAX_STALL_BACKOFF in maximum which should be
fixed up by the free memory. Maybe a check for free < MAX_STALL_BACKOFF
would be good but I didn't get that far with this.

[...]

> /*
> > @@ -2734,10 +2730,6 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
> >  		goto retry;
> >  	}
> > 
> > -	/* Any of the zones still reclaimable?  Don't OOM. */
> > -	if (zones_reclaimable)
> > -		return 1;
> > -
> 
> Looks cleanup of zones_reclaimable left.

Removed. Thanks!

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 50+ messages in thread
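
For illustration, the worst-case step of the +1-biased formula discussed
above, with made-up numbers (only MAX_STALL_BACKOFF = 16 and the formula
itself come from the patch). At stall_backoff == MAX_STALL_BACKOFF the
subtraction can exceed the reclaimable pages by at most MAX_STALL_BACKOFF,
and the subsequent "target += free" compensates for that as long as there
are at least that many free pages:

#include <stdio.h>

#define MAX_STALL_BACKOFF 16

int main(void)
{
        unsigned long reclaimable = 1000003;    /* not divisible by 16 */
        unsigned long free_pages = 512;         /* assumed >= MAX_STALL_BACKOFF */
        unsigned long target = reclaimable;
        int stall_backoff = MAX_STALL_BACKOFF;

        /* 1000003 - 16 * (1 + 62500) = -13: wraps around ... */
        target -= stall_backoff * (1 + target / MAX_STALL_BACKOFF);
        /* ... but the free pages pull it back into range */
        target += free_pages;

        printf("%lu\n", target);                /* prints 499, i.e. free_pages - 13 */
        return 0;
}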

* Re: [RFC 2/3] mm: throttle on IO only when there are too many dirty and writeback pages
  2015-10-30  4:18     ` Hillf Danton
@ 2015-10-30  8:37       ` Michal Hocko
  -1 siblings, 0 replies; 50+ messages in thread
From: Michal Hocko @ 2015-10-30  8:37 UTC (permalink / raw)
  To: Hillf Danton
  Cc: linux-mm, 'Andrew Morton', 'Linus Torvalds',
	'Mel Gorman', 'Johannes Weiner',
	'Rik van Riel', 'David Rientjes',
	'Tetsuo Handa', 'LKML',
	Christoph Lameter

On Fri 30-10-15 12:18:50, Hillf Danton wrote:
> 
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -3191,8 +3191,23 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> >  		 */
> >  		if (__zone_watermark_ok(zone, order, min_wmark_pages(zone),
> >  				ac->high_zoneidx, alloc_flags, target)) {
> > -			/* Wait for some write requests to complete then retry */
> > -			wait_iff_congested(zone, BLK_RW_ASYNC, HZ/50);
> > +			unsigned long writeback = zone_page_state(zone, NR_WRITEBACK),
> > +				      dirty = zone_page_state(zone, NR_FILE_DIRTY);
> > +
> > +			if (did_some_progress)
> > +				goto retry;
> > +
> > +			/*
> > +			 * If we didn't make any progress and have a lot of
> > +			 * dirty + writeback pages then we should wait for
> > +			 * an IO to complete to slow down the reclaim and
> > +			 * prevent from pre mature OOM
> > +			 */
> > +			if (2*(writeback + dirty) > reclaimable)
> > +				congestion_wait(BLK_RW_ASYNC, HZ/10);
> > +			else
> > +				cond_resched();
> > +
> 
> Looks the vmstat updater issue is not addressed.

This is not the aim of this patch set.
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC 2/3] mm: throttle on IO only when there are too many dirty and writeback pages
  2015-10-30  5:48     ` Kamezawa Hiroyuki
@ 2015-10-30  8:38       ` Michal Hocko
  -1 siblings, 0 replies; 50+ messages in thread
From: Michal Hocko @ 2015-10-30  8:38 UTC (permalink / raw)
  To: Kamezawa Hiroyuki
  Cc: linux-mm, Andrew Morton, Linus Torvalds, Mel Gorman,
	Johannes Weiner, Rik van Riel, David Rientjes, Tetsuo Handa,
	LKML

On Fri 30-10-15 14:48:40, KAMEZAWA Hiroyuki wrote:
[...]
> > @@ -3191,8 +3191,23 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> >   		 */
> >   		if (__zone_watermark_ok(zone, order, min_wmark_pages(zone),
> >   				ac->high_zoneidx, alloc_flags, target)) {
> > -			/* Wait for some write requests to complete then retry */
> > -			wait_iff_congested(zone, BLK_RW_ASYNC, HZ/50);
> > +			unsigned long writeback = zone_page_state(zone, NR_WRITEBACK),
> > +				      dirty = zone_page_state(zone, NR_FILE_DIRTY);
> > +
> > +			if (did_some_progress)
> > +				goto retry;
> > +
> > +			/*
> > +			 * If we didn't make any progress and have a lot of
> > +			 * dirty + writeback pages then we should wait for
> > +			 * an IO to complete to slow down the reclaim and
> > +			 * prevent from pre mature OOM
> > +			 */
> > +			if (2*(writeback + dirty) > reclaimable)
> 
> Doesn't this add unnecessary latency if other zones have enough clean memory ?

We know we haven't made any progress in the last reclaim round, so any
zone with clean memory is rather unlikely.

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC 1/3] mm, oom: refactor oom detection
  2015-10-30  8:23       ` Michal Hocko
@ 2015-10-30  9:41         ` Kamezawa Hiroyuki
  -1 siblings, 0 replies; 50+ messages in thread
From: Kamezawa Hiroyuki @ 2015-10-30  9:41 UTC (permalink / raw)
  To: Michal Hocko
  Cc: linux-mm, Andrew Morton, Linus Torvalds, Mel Gorman,
	Johannes Weiner, Rik van Riel, David Rientjes, Tetsuo Handa,
	LKML

On 2015/10/30 17:23, Michal Hocko wrote:
> On Fri 30-10-15 14:23:59, KAMEZAWA Hiroyuki wrote:
>> On 2015/10/30 0:17, mhocko@kernel.org wrote:
> [...]
>>> @@ -3135,13 +3145,56 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>>>    	if (gfp_mask & __GFP_NORETRY)
>>>    		goto noretry;
>>>
>>> -	/* Keep reclaiming pages as long as there is reasonable progress */
>>> +	/*
>>> +	 * Do not retry high order allocations unless they are __GFP_REPEAT
>>> +	 * and even then do not retry endlessly.
>>> +	 */
>>>    	pages_reclaimed += did_some_progress;
>>> -	if ((did_some_progress && order <= PAGE_ALLOC_COSTLY_ORDER) ||
>>> -	    ((gfp_mask & __GFP_REPEAT) && pages_reclaimed < (1 << order))) {
>>> -		/* Wait for some write requests to complete then retry */
>>> -		wait_iff_congested(ac->preferred_zone, BLK_RW_ASYNC, HZ/50);
>>> -		goto retry;
>>> +	if (order > PAGE_ALLOC_COSTLY_ORDER) {
>>> +		if (!(gfp_mask & __GFP_REPEAT) || pages_reclaimed >= (1<<order))
>>> +			goto noretry;
>>> +
>>> +		if (did_some_progress)
>>> +			goto retry;
>>
>> why directly retry here ?
>
> Because I wanted to preserve the previous logic for GFP_REPEAT as much
> as possible here and do an incremental change in the later patch.
>

I see.

> [...]
>
>>> @@ -3150,8 +3203,10 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>>>    		goto got_pg;
>>>
>>>    	/* Retry as long as the OOM killer is making progress */
>>> -	if (did_some_progress)
>>> +	if (did_some_progress) {
>>> +		stall_backoff = 0;
>>>    		goto retry;
>>> +	}
>>
>> Umm ? I'm sorry that I didn't notice page allocation may fail even
>> if order < PAGE_ALLOC_COSTLY_ORDER.  I thought old logic ignores
>> did_some_progress. It seems a big change.
>
> __alloc_pages_may_oom will set did_some_progress
>
>> So, now, 0-order page allocation may fail in a OOM situation ?
>
> No they don't normally and this patch doesn't change the logic here.
>

I understand your patch doesn't change the behavior.
Looking into __alloc_pages_may_oom(), *did_some_progress is finally set by

      if (out_of_memory(&oc) || WARN_ON_ONCE(gfp_mask & __GFP_NOFAIL))
                 *did_some_progress = 1;

...depends on out_of_memory() return value.
Now, allocation may fail if oom-killer is disabled.... Isn't it complicated ?

Shouldn't we have

  if (order < PAGE_ALLOC_COSTLY_ORDER)
     goto retry;

here ?

Thanks,
-Kame







^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC 1/3] mm, oom: refactor oom detection
  2015-10-30  8:36       ` Michal Hocko
@ 2015-10-30 10:14         ` Michal Hocko
  -1 siblings, 0 replies; 50+ messages in thread
From: Michal Hocko @ 2015-10-30 10:14 UTC (permalink / raw)
  To: Hillf Danton
  Cc: linux-mm, 'Andrew Morton', 'Linus Torvalds',
	'Mel Gorman', 'Johannes Weiner',
	'Rik van Riel', 'David Rientjes',
	'Tetsuo Handa', 'LKML'

On Fri 30-10-15 09:36:26, Michal Hocko wrote:
> On Fri 30-10-15 12:10:15, Hillf Danton wrote:
> [...]
> > > +	for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx, ac->nodemask) {
> > > +		unsigned long free = zone_page_state(zone, NR_FREE_PAGES);
> > > +		unsigned long reclaimable;
> > > +		unsigned long target;
> > > +
> > > +		reclaimable = zone_reclaimable_pages(zone) +
> > > +			      zone_page_state(zone, NR_ISOLATED_FILE) +
> > > +			      zone_page_state(zone, NR_ISOLATED_ANON);
> > > +		target = reclaimable;
> > > +		target -= stall_backoff * (1 + target/MAX_STALL_BACKOFF);
> > 
> > 		target = reclaimable - stall_backoff * (1 + target/MAX_STALL_BACKOFF);
> > 		             = reclaimable - stall_backoff - stall_backoff  * (target/MAX_STALL_BACKOFF);
> > 
> > then the first stall_backoff looks unreasonable.
> 
> First stall_backoff is off by 1 but that shouldn't make any difference.
> 
> > I guess you mean
> > 		target	= reclaimable - target * (stall_backoff/MAX_STALL_BACKOFF);
> > 			= reclaimable - stall_back * (target/MAX_STALL_BACKOFF);
> 
> No the reason I used the bias is to converge for MAX_STALL_BACKOFF. If
> you have target which is not divisible by MAX_STALL_BACKOFF then the
> rounding would get target > 0 and so we wouldn't converge. With the +1
> you underflow which is MAX_STALL_BACKOFF in maximum which should be
> fixed up by the free memory. Maybe a check for free < MAX_STALL_BACKOFF
> would be good but I didn't get that far with this.

I've ended up with the following after all. It uses a ceiling for the
division, which should be underflow safe albeit less readable (at least
for me).
---
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0dc1ca9b1219..c9a4e62f234e 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3176,7 +3176,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 			      zone_page_state(zone, NR_ISOLATED_FILE) +
 			      zone_page_state(zone, NR_ISOLATED_ANON);
 		target = reclaimable;
-		target -= stall_backoff * (1 + target/MAX_STALL_BACKOFF);
+		target -= (stall_backoff * target + MAX_STALL_BACKOFF - 1) / MAX_STALL_BACKOFF;
 		target += free;
 
 		/*
diff --git a/mm/vmscan.c b/mm/vmscan.c
index bc14217acd47..0b3ec972ec7a 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2672,7 +2672,6 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
 	int initial_priority = sc->priority;
 	unsigned long total_scanned = 0;
 	unsigned long writeback_threshold;
-	bool zones_reclaimable;
 retry:
 	delayacct_freepages_start();
 
@@ -2683,7 +2682,7 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
 		vmpressure_prio(sc->gfp_mask, sc->target_mem_cgroup,
 				sc->priority);
 		sc->nr_scanned = 0;
-		zones_reclaimable = shrink_zones(zonelist, sc);
+		shrink_zones(zonelist, sc);
 
 		total_scanned += sc->nr_scanned;
 		if (sc->nr_reclaimed >= sc->nr_to_reclaim)
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply related	[flat|nested] 50+ messages in thread
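
For illustration, the same worst-case step with the ceiling-based
expression from the updated hunk above, using a made-up reclaimable
value: the amount subtracted can never exceed target, so there is no
underflow left to compensate for.

#include <stdio.h>

#define MAX_STALL_BACKOFF 16

int main(void)
{
        unsigned long reclaimable = 1000003;    /* made up, not divisible by 16 */
        unsigned long target = reclaimable;
        int stall_backoff = MAX_STALL_BACKOFF;

        /* ceiling division: (16 * 1000003 + 15) / 16 == 1000003 exactly */
        target -= (stall_backoff * target + MAX_STALL_BACKOFF - 1) / MAX_STALL_BACKOFF;

        printf("%lu\n", target);                /* prints 0: converges, no underflow */
        return 0;
}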

* Re: [RFC 1/3] mm, oom: refactor oom detection
  2015-10-30  9:41         ` Kamezawa Hiroyuki
@ 2015-10-30 10:18           ` Michal Hocko
  -1 siblings, 0 replies; 50+ messages in thread
From: Michal Hocko @ 2015-10-30 10:18 UTC (permalink / raw)
  To: Kamezawa Hiroyuki
  Cc: linux-mm, Andrew Morton, Linus Torvalds, Mel Gorman,
	Johannes Weiner, Rik van Riel, David Rientjes, Tetsuo Handa,
	LKML

On Fri 30-10-15 18:41:30, KAMEZAWA Hiroyuki wrote:
[...]
> >>So, now, 0-order page allocation may fail in a OOM situation ?
> >
> >No they don't normally and this patch doesn't change the logic here.
> >
> 
> I understand your patch doesn't change the behavior.
> Looking into __alloc_pages_may_oom(), *did_some_progress is finally set by
> 
>      if (out_of_memory(&oc) || WARN_ON_ONCE(gfp_mask & __GFP_NOFAIL))
>                 *did_some_progress = 1;
> 
> ...depends on out_of_memory() return value.
> Now, allocation may fail if oom-killer is disabled.... Isn't it complicated ?

Yes and there shouldn't be any allocations after OOM killer has been
disabled. The userspace is already frozen and there shouldn't be any
other memory activity.
 
> Shouldn't we have
> 
>  if (order < PAGE_ALLOC_COSTLY_ORDER)
>     goto retry;
> 
> here ?

How could we move on during the suspend if the reclaim doesn't proceed
and we cannot really kill anything to free up memory resources? We are
simply past the moment any userspace can be woken up. Anyway, this is
tangential to this particular patch series.

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC 1/3] mm, oom: refactor oom detection
  2015-10-30 10:14         ` Michal Hocko
@ 2015-10-30 13:32           ` Tetsuo Handa
  -1 siblings, 0 replies; 50+ messages in thread
From: Tetsuo Handa @ 2015-10-30 13:32 UTC (permalink / raw)
  To: mhocko
  Cc: hillf.zj, linux-mm, akpm, torvalds, mgorman, hannes, riel,
	rientjes, linux-kernel

Michal Hocko wrote:
> +		target -= (stall_backoff * target + MAX_STALL_BACKOFF - 1) / MAX_STALL_BACKOFF;
target -= DIV_ROUND_UP(stall_backoff * target, MAX_STALL_BACKOFF);



Michal Hocko wrote:
> This alone wouldn't be sufficient, though, because the writeback might
> get stuck and reclaimable pages might be pinned for a really long time
> or even depend on the current allocation context.

Is this a dependency which I worried at
http://lkml.kernel.org/r/201510262044.BAI43236.FOMSFFOtOVLJQH@I-love.SAKURA.ne.jp ?

>                                                   Therefore there is a
> feedback mechanism implemented which reduces the reclaim target after
> each reclaim round without any progress.

If yes, this feedback mechanism will help avoiding infinite wait loop.

>                                          This means that we should
> eventually converge to only NR_FREE_PAGES as the target and fail on the
> wmark check and proceed to OOM.

What if all in-flight allocation requests are !__GFP_NOFAIL && !__GFP_FS ?
(In other words, either "no __GFP_FS allocations are in-flight" or "all
__GFP_FS allocations are in-flight but are either waiting for completion
of operations which depend on !__GFP_FS allocations with a lock held or
waiting for that lock to be released".)

Don't we need to call out_of_memory() even though !__GFP_FS allocations?

>                                 The backoff is simple and linear with
> 1/16 of the reclaimable pages for each round without any progress. We
> are optimistic and reset counter for successful reclaim rounds.

^ permalink raw reply	[flat|nested] 50+ messages in thread
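
For reference, DIV_ROUND_UP() as suggested above is defined in
include/linux/kernel.h as

#define DIV_ROUND_UP(n, d)      (((n) + (d) - 1) / (d))

so it computes the same ceiling as the open-coded
(stall_backoff * target + MAX_STALL_BACKOFF - 1) / MAX_STALL_BACKOFF, just
more readably.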

* Re: [RFC 1/3] mm, oom: refactor oom detection
  2015-10-30 13:32           ` Tetsuo Handa
@ 2015-10-30 14:55             ` Michal Hocko
  -1 siblings, 0 replies; 50+ messages in thread
From: Michal Hocko @ 2015-10-30 14:55 UTC (permalink / raw)
  To: Tetsuo Handa
  Cc: hillf.zj, linux-mm, akpm, torvalds, mgorman, hannes, riel,
	rientjes, linux-kernel

On Fri 30-10-15 22:32:27, Tetsuo Handa wrote:
> Michal Hocko wrote:
> > +		target -= (stall_backoff * target + MAX_STALL_BACKOFF - 1) / MAX_STALL_BACKOFF;
> target -= DIV_ROUND_UP(stall_backoff * target, MAX_STALL_BACKOFF);

Ohh, we have a macro for that. Good to know. Thanks. It sure looks much
easier to follow.
 
> Michal Hocko wrote:
> > This alone wouldn't be sufficient, though, because the writeback might
> > get stuck and reclaimable pages might be pinned for a really long time
> > or even depend on the current allocation context.
> 
> Is this a dependency which I worried at
> http://lkml.kernel.org/r/201510262044.BAI43236.FOMSFFOtOVLJQH@I-love.SAKURA.ne.jp ?

Yes, I had restricted allocation contexts in mind here.

> >                                                   Therefore there is a
> > feedback mechanism implemented which reduces the reclaim target after
> > each reclaim round without any progress.
> 
> If yes, this feedback mechanism will help avoiding infinite wait loop.
> 
> >                                          This means that we should
> > eventually converge to only NR_FREE_PAGES as the target and fail on the
> > wmark check and proceed to OOM.
> 
> What if all in-flight allocation requests are !__GFP_NOFAIL && !__GFP_FS ?

Then we will loop like crazy hoping that _something_ will reclaim memory
for us. Same as we do now.

> (In other words, either "no __GFP_FS allocations are in-flight" or "all
> __GFP_FS allocations are in-flight but are either waiting for completion
> of operations which depend on !__GFP_FS allocations with a lock held or
> waiting for that lock to be released".)
> 
> Don't we need to call out_of_memory() even though !__GFP_FS allocations?

I do not think this is in the scope of this patch series. I am trying to
normalize the OOM detection, and GFP_FS is a separate beast: we do not
have enough counters to decide whether the OOM killer would be
premature or not (e.g. we do not know how much memory is unreclaimable
just because of NOFS context). I am convinced that GFP_FS simply has to
fail the allocation as I've suggested quite some time ago and plan to
revisit it soon(ish). I consider the two orthogonal.

Thanks!
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC 1/3] mm, oom: refactor oom detection
  2015-10-30 10:14         ` Michal Hocko
@ 2015-10-31  3:57           ` Hillf Danton
  -1 siblings, 0 replies; 50+ messages in thread
From: Hillf Danton @ 2015-10-31  3:57 UTC (permalink / raw)
  To: 'Michal Hocko'
  Cc: linux-mm, 'Andrew Morton', 'Linus Torvalds',
	'Mel Gorman', 'Johannes Weiner',
	'Rik van Riel', 'David Rientjes',
	'Tetsuo Handa', 'LKML'

> On Fri 30-10-15 09:36:26, Michal Hocko wrote:
> > On Fri 30-10-15 12:10:15, Hillf Danton wrote:
> > [...]
> > > > +	for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx, ac->nodemask) {
> > > > +		unsigned long free = zone_page_state(zone, NR_FREE_PAGES);
> > > > +		unsigned long reclaimable;
> > > > +		unsigned long target;
> > > > +
> > > > +		reclaimable = zone_reclaimable_pages(zone) +
> > > > +			      zone_page_state(zone, NR_ISOLATED_FILE) +
> > > > +			      zone_page_state(zone, NR_ISOLATED_ANON);
> > > > +		target = reclaimable;
> > > > +		target -= stall_backoff * (1 + target/MAX_STALL_BACKOFF);
> > >
> > > 		target = reclaimable - stall_backoff * (1 + target/MAX_STALL_BACKOFF);
> > > 		             = reclaimable - stall_backoff - stall_backoff  * (target/MAX_STALL_BACKOFF);
> > >
> > > then the first stall_backoff looks unreasonable.
> >
> > First stall_backoff is off by 1 but that shouldn't make any difference.
> >
> > > I guess you mean
> > > 		target	= reclaimable - target * (stall_backoff/MAX_STALL_BACKOFF);
> > > 			= reclaimable - stall_back * (target/MAX_STALL_BACKOFF);
> >
> > No the reason I used the bias is to converge for MAX_STALL_BACKOFF. If
> > you have target which is not divisible by MAX_STALL_BACKOFF then the
> > rounding would get target > 0 and so we wouldn't converge. With the +1
> > you underflow which is MAX_STALL_BACKOFF in maximum which should be
> > fixed up by the free memory. Maybe a check for free < MAX_STALL_BACKOFF
> > would be good but I didn't get that far with this.
> 
> I've ended up with the following after all. It uses ceiling for the
> division this should be underflow safe albeit less readable (at least
> for me).
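
Just to convince myself about the convergence, a quick user-space check
(assuming MAX_STALL_BACKOFF == 16; obviously not kernel code) comparing the
per-step floor with the ceiling division for a target which is not divisible
by MAX_STALL_BACKOFF:

	#include <stdio.h>

	#define MAX_STALL_BACKOFF 16
	#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

	int main(void)
	{
		unsigned long reclaimable = 1000003;	/* not divisible by 16 */
		unsigned long floored, ceiled;

		/* per-step floor: leaves the remainder behind at full backoff */
		floored = reclaimable -
			  MAX_STALL_BACKOFF * (reclaimable / MAX_STALL_BACKOFF);
		/* ceiling division: subtracts the whole target at full backoff */
		ceiled = reclaimable -
			 DIV_ROUND_UP(MAX_STALL_BACKOFF * reclaimable,
				      MAX_STALL_BACKOFF);

		/* prints "floor leaves 3, ceiling leaves 0" */
		printf("floor leaves %lu, ceiling leaves %lu\n", floored, ceiled);
		return 0;
	}

so the ceiling variant really does converge to a zero target at full backoff
and only the free pages are left for the watermark check.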

Looks good, thanks.

Acked-by: Hillf Danton <hillf.zj@alibaba-inc.com>

> ---
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 0dc1ca9b1219..c9a4e62f234e 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -3176,7 +3176,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  			      zone_page_state(zone, NR_ISOLATED_FILE) +
>  			      zone_page_state(zone, NR_ISOLATED_ANON);
>  		target = reclaimable;
> -		target -= stall_backoff * (1 + target/MAX_STALL_BACKOFF);
> +		target -= (stall_backoff * target + MAX_STALL_BACKOFF - 1) / MAX_STALL_BACKOFF;
>  		target += free;
> 
>  		/*
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index bc14217acd47..0b3ec972ec7a 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -2672,7 +2672,6 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
>  	int initial_priority = sc->priority;
>  	unsigned long total_scanned = 0;
>  	unsigned long writeback_threshold;
> -	bool zones_reclaimable;
>  retry:
>  	delayacct_freepages_start();
> 
> @@ -2683,7 +2682,7 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
>  		vmpressure_prio(sc->gfp_mask, sc->target_mem_cgroup,
>  				sc->priority);
>  		sc->nr_scanned = 0;
> -		zones_reclaimable = shrink_zones(zonelist, sc);
> +		shrink_zones(zonelist, sc);
> 
>  		total_scanned += sc->nr_scanned;
>  		if (sc->nr_reclaimed >= sc->nr_to_reclaim)
> --
> Michal Hocko
> SUSE Labs


^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC 1/3] mm, oom: refactor oom detection
  2015-10-29 15:17   ` mhocko
@ 2015-11-12 12:39     ` Michal Hocko
  -1 siblings, 0 replies; 50+ messages in thread
From: Michal Hocko @ 2015-11-12 12:39 UTC (permalink / raw)
  To: linux-mm
  Cc: Andrew Morton, Linus Torvalds, Mel Gorman, Johannes Weiner,
	Rik van Riel, David Rientjes, Tetsuo Handa, LKML

On Thu 29-10-15 16:17:13, mhocko@kernel.org wrote:
[...]
> @@ -3135,13 +3145,56 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  	if (gfp_mask & __GFP_NORETRY)
>  		goto noretry;
>  
> -	/* Keep reclaiming pages as long as there is reasonable progress */
> +	/*
> +	 * Do not retry high order allocations unless they are __GFP_REPEAT
> +	 * and even then do not retry endlessly.
> +	 */
>  	pages_reclaimed += did_some_progress;
> -	if ((did_some_progress && order <= PAGE_ALLOC_COSTLY_ORDER) ||
> -	    ((gfp_mask & __GFP_REPEAT) && pages_reclaimed < (1 << order))) {
> -		/* Wait for some write requests to complete then retry */
> -		wait_iff_congested(ac->preferred_zone, BLK_RW_ASYNC, HZ/50);
> -		goto retry;
> +	if (order > PAGE_ALLOC_COSTLY_ORDER) {
> +		if (!(gfp_mask & __GFP_REPEAT) || pages_reclaimed >= (1<<order))
> +			goto noretry;

This is not correct because we could fail a __GFP_NOFAIL allocation. It
should do:
		if (!(gfp_mask & __GFP_NOFAIL) &&
		   (!(gfp_mask & __GFP_REPEAT) || pages_reclaimed >= (1<<order)))
			goto noretry;

It looks rather ugly but it will get much better with patch 3, which
reorganizes the code a bit.

> +
> +		if (did_some_progress)
> +			goto retry;
> +	}

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: RFC: OOM detection rework v1
  2015-10-29 15:17 ` mhocko
@ 2015-11-12 12:44   ` Michal Hocko
  -1 siblings, 0 replies; 50+ messages in thread
From: Michal Hocko @ 2015-11-12 12:44 UTC (permalink / raw)
  To: linux-mm
  Cc: Andrew Morton, Linus Torvalds, Mel Gorman, Johannes Weiner,
	Rik van Riel, David Rientjes, Tetsuo Handa, LKML

Just a heads up. I plan to repost this with changes reflecting the
feedback so far after the merge window closes. There were only a few minor
style fixes and one bug fix (GFP_NOFAIL vs. costly high order
allocations). I know people are busy with the merge window now and
I hope that the future post will be a better basis for further
discussion.
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC 1/3] mm, oom: refactor oom detection
  2015-12-11 16:16   ` Johannes Weiner
@ 2015-12-14 18:34     ` Michal Hocko
  0 siblings, 0 replies; 50+ messages in thread
From: Michal Hocko @ 2015-12-14 18:34 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: linux-mm, Andrew Morton, Linus Torvalds, Mel Gorman,
	David Rientjes, Tetsuo Handa, Hillf Danton, KAMEZAWA Hiroyuki

On Fri 11-12-15 11:16:15, Johannes Weiner wrote:
> On Tue, Dec 01, 2015 at 01:56:45PM +0100, Michal Hocko wrote:
> > From: Michal Hocko <mhocko@suse.com>
[...]
> This makes sense to me and the patch looks good. Just a few nitpicks.

Thanks for the review!

> Could you change the word "refactor" in the title? This is not a
> non-functional change.

Sure. I will go with rework.

> > @@ -2984,6 +2984,13 @@ static inline bool is_thp_gfp_mask(gfp_t gfp_mask)
> >  	return (gfp_mask & (GFP_TRANSHUGE | __GFP_KSWAPD_RECLAIM)) == GFP_TRANSHUGE;
> >  }
> >  
> > +/*
> > + * Number of backoff steps for potentially reclaimable pages if the direct reclaim
> > + * cannot make any progress. Each step will reduce 1/MAX_STALL_BACKOFF of the
> > + * reclaimable memory.
> > + */
> > +#define MAX_STALL_BACKOFF 16
> 
> "stall backoff" is fairly non-descript and doesn't give a good clue
> about what exactly the variable is going to be doing.

The idea was to reflect that this is a step in which we do a backoff
rather than an absolute amount.

> How about MAX_DISCOUNT_RECLAIMABLE?

this would indicate an absolute value to me. What about MAX_RECLAIM_RETRIES?

> > @@ -3155,13 +3165,53 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> >  	if (gfp_mask & __GFP_NORETRY)
> >  		goto noretry;
> >  
> > -	/* Keep reclaiming pages as long as there is reasonable progress */
> > +	/*
> > +	 * Do not retry high order allocations unless they are __GFP_REPEAT
> > +	 * and even then do not retry endlessly unless explicitly told so
> > +	 */
> >  	pages_reclaimed += did_some_progress;
> > -	if ((did_some_progress && order <= PAGE_ALLOC_COSTLY_ORDER) ||
> > -	    ((gfp_mask & __GFP_REPEAT) && pages_reclaimed < (1 << order))) {
> > -		/* Wait for some write requests to complete then retry */
> > -		wait_iff_congested(ac->preferred_zone, BLK_RW_ASYNC, HZ/50);
> > -		goto retry;
> > +	if (order > PAGE_ALLOC_COSTLY_ORDER) {
> > +		if (!(gfp_mask & __GFP_NOFAIL) &&
> > +		   (!(gfp_mask & __GFP_REPEAT) || pages_reclaimed >= (1<<order)))
> > +			goto noretry;
> > +
> > +		if (did_some_progress)
> > +			goto retry;
> > +	}
> 
> I'm a bit bothered by this change as goto noretry is not the inverse
> of not doing goto retry: goto noretry jumps over _may_oom().
> 
> Of course, _may_oom() would filter a higher-order allocation anyway,
> and we could say that it's such a fundamental concept that will never
> change in the kernel that it's not a problem to repeat this clause
> here. But you could probably say the same thing about not invoking OOM
> for < ZONE_NORMAL, for !__GFP_FS, for __GFP_THISNODE, and I'm a bit
> wary of these things spreading out of _may_oom() again after I just
> put effort into consolidating all the OOM clauses in there.

You are right. Our OOM rules are complex already and any partial rules
outside of _may_oom are adding to the confusion even more.

> It should be possible to keep the original branch and then nest the
> decaying retry logic in there.
> 
> > +	/*
> > +	 * Be optimistic and consider all pages on reclaimable LRUs as usable
> > +	 * but make sure we converge to OOM if we cannot make any progress after
> > +	 * multiple consecutive failed attempts.
> > +	 */
> > +	if (did_some_progress)
> > +		stall_backoff = 0;
> > +	else
> > +		stall_backoff = min(stall_backoff+1, MAX_STALL_BACKOFF);
> 
> The rest of the backoff logic would be nasty to shift out by another
> tab, but it could easily live in its own function.

Yeah, I'll go this way. It looks much better this way!

[...]
> > +	/*
> > +	 * Keep reclaiming pages while there is a chance this will lead somewhere.
> > +	 * If none of the target zones can satisfy our allocation request even
> > +	 * if all reclaimable pages are considered then we are screwed and have
> > +	 * to go OOM.
> > +	 */
> > +	for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx, ac->nodemask) {
> > +		unsigned long free = zone_page_state_snapshot(zone, NR_FREE_PAGES);
> > +		unsigned long target;
> > +
> > +		target = zone_reclaimable_pages(zone);
> > +		target -= DIV_ROUND_UP(stall_backoff * target, MAX_STALL_BACKOFF);
> > +		target += free;
> 
> target is also a little non-descript. Maybe available?
> 
> 		available += zone_reclaimable_pages(zone);
> 		available -= DIV_ROUND_UP(discount_reclaimable * available,
> 					  MAX_DISCOUNT_RECLAIMABLE);
> 		available += zone_page_state_snapshot(zone, NR_FREE_PAGES);
> 
> But yeah, this is mostly bikeshed territory now.

Thanks for the feedback. Here is the cumulative diff after all my current
changes.
---
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index e267faad4649..f77e283fb8c6 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2984,6 +2984,75 @@ static inline bool is_thp_gfp_mask(gfp_t gfp_mask)
 	return (gfp_mask & (GFP_TRANSHUGE | __GFP_KSWAPD_RECLAIM)) == GFP_TRANSHUGE;
 }
 
+/*
+ * Maximum number of reclaim retries without any progress before OOM killer
+ * is considered as the only way to move forward.
+ */
+#define MAX_RECLAIM_RETRIES 16
+
+/*
+ * Checks whether it makes sense to retry the reclaim to make a forward progress
+ * for the given allocation request.
+ * The reclaim feedback represented by did_some_progress (any progress during
+ * the last reclaim round), pages_reclaimed (cumulative number of reclaimed
+ * pages) and no_progress_loops (number of reclaim rounds without any progress
+ * in a row) is considered as well as the reclaimable pages on the applicable
+ * zone list (with a backoff mechanism which is a function of no_progress_loops).
+ *
+ * Returns true if a retry is viable or false to enter the oom path.
+ */
+static inline bool
+should_reclaim_retry(gfp_t gfp_mask, unsigned order,
+		     struct alloc_context *ac, int alloc_flags,
+		     bool did_some_progress, unsigned long pages_reclaimed,
+		     int no_progress_loops)
+{
+	struct zone *zone;
+	struct zoneref *z;
+
+	/*
+	 * Make sure we converge to OOM if we cannot make any progress
+	 * several times in the row.
+	 */
+	if (no_progress_loops > MAX_RECLAIM_RETRIES)
+		return false;
+
+	/* Do not retry high order allocations unless they are __GFP_REPEAT */
+	if (order > PAGE_ALLOC_COSTLY_ORDER) {
+		if (!(gfp_mask & __GFP_REPEAT) || pages_reclaimed >= (1<<order))
+			return false;
+
+		if (did_some_progress)
+			return true;
+	}
+
+	/*
+	 * Keep reclaiming pages while there is a chance this will lead somewhere.
+	 * If none of the target zones can satisfy our allocation request even
+	 * if all reclaimable pages are considered then we are screwed and have
+	 * to go OOM.
+	 */
+	for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx, ac->nodemask) {
+		unsigned long available;
+
+		available = zone_reclaimable_pages(zone);
+		available -= DIV_ROUND_UP(no_progress_loops * available, MAX_RECLAIM_RETRIES);
+		available += zone_page_state_snapshot(zone, NR_FREE_PAGES);
+
+		/*
+		 * Would the allocation succeed if we reclaimed the whole available?
+		 */
+		if (__zone_watermark_ok(zone, order, min_wmark_pages(zone),
+				ac->high_zoneidx, alloc_flags, available)) {
+			/* Wait for some write requests to complete then retry */
+			wait_iff_congested(zone, BLK_RW_ASYNC, HZ/50);
+			return true;
+		}
+	}
+
+	return false;
+}
+
 static inline struct page *
 __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 						struct alloc_context *ac)
@@ -2996,6 +3065,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	enum migrate_mode migration_mode = MIGRATE_ASYNC;
 	bool deferred_compaction = false;
 	int contended_compaction = COMPACT_CONTENDED_NONE;
+	int no_progress_loops = 0;
 
 	/*
 	 * In the slowpath, we sanity check order to avoid ever trying to
@@ -3155,23 +3225,28 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	if (gfp_mask & __GFP_NORETRY)
 		goto noretry;
 
-	/* Keep reclaiming pages as long as there is reasonable progress */
-	pages_reclaimed += did_some_progress;
-	if ((did_some_progress && order <= PAGE_ALLOC_COSTLY_ORDER) ||
-	    ((gfp_mask & __GFP_REPEAT) && pages_reclaimed < (1 << order))) {
-		/* Wait for some write requests to complete then retry */
-		wait_iff_congested(ac->preferred_zone, BLK_RW_ASYNC, HZ/50);
-		goto retry;
+	if (did_some_progress) {
+		no_progress_loops = 0;
+		pages_reclaimed += did_some_progress;
+	} else {
+		no_progress_loops++;
 	}
 
+	if (should_reclaim_retry(gfp_mask, order, ac, alloc_flags,
+				 did_some_progress > 0, pages_reclaimed,
+				 no_progress_loops))
+		goto retry;
+
 	/* Reclaim has failed us, start killing things */
 	page = __alloc_pages_may_oom(gfp_mask, order, ac, &did_some_progress);
 	if (page)
 		goto got_pg;
 
 	/* Retry as long as the OOM killer is making progress */
-	if (did_some_progress)
+	if (did_some_progress) {
+		no_progress_loops = 0;
 		goto retry;
+	}
 
 noretry:
 	/*
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply related	[flat|nested] 50+ messages in thread

* Re: [RFC 1/3] mm, oom: refactor oom detection
  2015-12-01 12:56 ` [RFC 1/3] mm, oom: refactor oom detection Michal Hocko
@ 2015-12-11 16:16   ` Johannes Weiner
  2015-12-14 18:34     ` Michal Hocko
  0 siblings, 1 reply; 50+ messages in thread
From: Johannes Weiner @ 2015-12-11 16:16 UTC (permalink / raw)
  To: Michal Hocko
  Cc: linux-mm, Andrew Morton, Linus Torvalds, Mel Gorman,
	David Rientjes, Tetsuo Handa, Hillf Danton, KAMEZAWA Hiroyuki,
	Michal Hocko

On Tue, Dec 01, 2015 at 01:56:45PM +0100, Michal Hocko wrote:
> From: Michal Hocko <mhocko@suse.com>
> 
> __alloc_pages_slowpath has traditionally relied on direct reclaim
> and did_some_progress as an indicator that it makes sense to retry
> the allocation rather than declaring OOM. shrink_zones had to rely on
> zone_reclaimable if shrink_zone didn't make any progress, to prevent
> a premature OOM killer invocation - the LRU might be full of dirty
> or writeback pages and direct reclaim cannot clean those up.
> 
> zone_reclaimable allows rescanning the reclaimable lists several
> times and restarting if a page is freed. This is really subtle behavior
> and it might lead to a livelock when a single freed page keeps the
> allocator looping but the current task will not be able to allocate that
> single page. The OOM killer would be more appropriate than looping
> without any progress for an unbounded amount of time.
> 
> This patch changes the OOM detection logic and pulls it out of shrink_zone,
> which is too low a level to be appropriate for any high-level decision such
> as OOM, which is a per-zonelist property. It is __alloc_pages_slowpath which
> knows how many attempts have been made and what the progress has been so
> far, therefore it is the more appropriate place to implement this logic.
> 
> The new heuristic tries to be more deterministic and easier to follow.
> It builds on an assumption that retrying makes sense only if the
> currently reclaimable memory + free pages would allow the current
> allocation request to succeed (as per __zone_watermark_ok) at least for
> one zone in the usable zonelist.
> 
> This alone wouldn't be sufficient, though, because the writeback might
> get stuck and reclaimable pages might be pinned for a really long time
> or even depend on the current allocation context. Therefore there is a
> feedback mechanism implemented which reduces the reclaim target after
> each reclaim round without any progress. This means that we should
> eventually converge to only NR_FREE_PAGES as the target and fail on the
> wmark check and proceed to OOM. The backoff is simple and linear with
> 1/16 of the reclaimable pages for each round without any progress. We
> are optimistic and reset the counter after successful reclaim rounds.
> 
> Costly high order pages mostly preserve their semantics: those without
> __GFP_REPEAT fail right away, while those which have the flag set will
> back off after the amount of reclaimed pages reaches the equivalent of
> the requested order. The only difference is that if there was no progress
> during the reclaim we rely on the zone watermark check. This is a more
> logical thing to do than the previous 1<<order attempts, which were a
> result of zone_reclaimable faking the progress.
> 
> [rientjes@google.com: use zone_page_state_snapshot for NR_FREE_PAGES]
> [rientjes@google.com: shrink_zones doesn't need to return anything]
> Acked-by: Hillf Danton <hillf.zj@alibaba-inc.com>
> Signed-off-by: Michal Hocko <mhocko@suse.com>

This makes sense to me and the patch looks good. Just a few nitpicks.

Could you change the word "refactor" in the title? This is not a
non-functional change.

> @@ -2984,6 +2984,13 @@ static inline bool is_thp_gfp_mask(gfp_t gfp_mask)
>  	return (gfp_mask & (GFP_TRANSHUGE | __GFP_KSWAPD_RECLAIM)) == GFP_TRANSHUGE;
>  }
>  
> +/*
> + * Number of backoff steps for potentially reclaimable pages if the direct reclaim
> + * cannot make any progress. Each step will reduce 1/MAX_STALL_BACKOFF of the
> + * reclaimable memory.
> + */
> > +#define MAX_STALL_BACKOFF 16

"stall backoff" is fairly non-descript and doesn't give a good clue
about what exactly the variable is going to be doing.

How about MAX_DISCOUNT_RECLAIMABLE?

> @@ -3155,13 +3165,53 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  	if (gfp_mask & __GFP_NORETRY)
>  		goto noretry;
>  
> -	/* Keep reclaiming pages as long as there is reasonable progress */
> +	/*
> +	 * Do not retry high order allocations unless they are __GFP_REPEAT
> +	 * and even then do not retry endlessly unless explicitly told so
> +	 */
>  	pages_reclaimed += did_some_progress;
> -	if ((did_some_progress && order <= PAGE_ALLOC_COSTLY_ORDER) ||
> -	    ((gfp_mask & __GFP_REPEAT) && pages_reclaimed < (1 << order))) {
> -		/* Wait for some write requests to complete then retry */
> -		wait_iff_congested(ac->preferred_zone, BLK_RW_ASYNC, HZ/50);
> -		goto retry;
> +	if (order > PAGE_ALLOC_COSTLY_ORDER) {
> +		if (!(gfp_mask & __GFP_NOFAIL) &&
> +		   (!(gfp_mask & __GFP_REPEAT) || pages_reclaimed >= (1<<order)))
> +			goto noretry;
> +
> +		if (did_some_progress)
> +			goto retry;
> +	}

I'm a bit bothered by this change as goto noretry is not the inverse
of not doing goto retry: goto noretry jumps over _may_oom().

Of course, _may_oom() would filter a higher-order allocation anyway,
and we could say that it's such a fundamental concept that will never
change in the kernel that it's not a problem to repeat this clause
here. But you could probably say the same thing about not invoking OOM
for < ZONE_NORMAL, for !__GFP_FS, for __GFP_THISNODE, and I'm a bit
wary of these things spreading out of _may_oom() again after I just
put effort into consolidating all the OOM clauses in there.

It should be possible to keep the original branch and then nest the
decaying retry logic in there.

> +	/*
> +	 * Be optimistic and consider all pages on reclaimable LRUs as usable
> +	 * but make sure we converge to OOM if we cannot make any progress after
> +	 * multiple consecutive failed attempts.
> +	 */
> +	if (did_some_progress)
> +		stall_backoff = 0;
> +	else
> +		stall_backoff = min(stall_backoff+1, MAX_STALL_BACKOFF);

The rest of the backoff logic would be nasty to shift out by another
tab, but it could easily live in its own function.

In fact, the longer I think about it, it would probably be better for
__alloc_pages_slowpath anyway, as that zonelist walk looks a bit too
low-level and unwieldy for the high-level control flow function.

The outer control flow could look something like this:

	/* Do not loop if specifically requested */
	if (gfp_mask & __GFP_NORETRY)
		goto noretry;

	/* Keep reclaiming pages as long as there is reasonable progress */
	if (did_some_progress) {
		pages_reclaimed += did_some_progress;
		no_progress_loops = 0;
	} else {
		no_progress_loops++;
	}
	if (should_retry_reclaim(gfp_mask, order, ac, did_some_progress,
				 no_progress_loops, pages_reclaimed)) {
		/* Wait for some write requests to complete then retry */
		wait_iff_congested(ac->preferred_zone, BLK_RW_ASYNC, HZ/50);
		goto retry;
	}

	/* Reclaim has failed us, start killing things */
	page = __alloc_pages_may_oom(gfp_mask, order, ac, &did_some_progress);
	if (page)
		goto got_pg;

	/* Retry as long as the OOM killer is making progress */
	if (did_some_progress) {
		no_progress_loops = 0;
		goto retry;
	}

noretry:

> +	/*
> +	 * Keep reclaiming pages while there is a chance this will lead somewhere.
> +	 * If none of the target zones can satisfy our allocation request even
> +	 * if all reclaimable pages are considered then we are screwed and have
> +	 * to go OOM.
> +	 */
> +	for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx, ac->nodemask) {
> +		unsigned long free = zone_page_state_snapshot(zone, NR_FREE_PAGES);
> +		unsigned long target;
> +
> +		target = zone_reclaimable_pages(zone);
> +		target -= DIV_ROUND_UP(stall_backoff * target, MAX_STALL_BACKOFF);
> +		target += free;

target is also a little non-descript. Maybe available?

		available += zone_reclaimable_pages(zone);
		available -= DIV_ROUND_UP(discount_reclaimable * available,
					  MAX_DISCOUNT_RECLAIMABLE);
		available += zone_page_state_snapshot(zone, NR_FREE_PAGES);

But yeah, this is mostly bikeshed territory now.

^ permalink raw reply	[flat|nested] 50+ messages in thread

* [RFC 1/3] mm, oom: refactor oom detection
  2015-12-01 12:56 [RFC 0/3] OOM detection rework v3 Michal Hocko
@ 2015-12-01 12:56 ` Michal Hocko
  2015-12-11 16:16   ` Johannes Weiner
  0 siblings, 1 reply; 50+ messages in thread
From: Michal Hocko @ 2015-12-01 12:56 UTC (permalink / raw)
  To: linux-mm
  Cc: Andrew Morton, Linus Torvalds, Mel Gorman, Johannes Weiner,
	David Rientjes, Tetsuo Handa, Hillf Danton, KAMEZAWA Hiroyuki,
	Michal Hocko

From: Michal Hocko <mhocko@suse.com>

__alloc_pages_slowpath has traditionally relied on direct reclaim
and did_some_progress as an indicator that it makes sense to retry
the allocation rather than declaring OOM. shrink_zones had to rely on
zone_reclaimable if shrink_zone didn't make any progress, to prevent
a premature OOM killer invocation - the LRU might be full of dirty
or writeback pages and direct reclaim cannot clean those up.

zone_reclaimable allows rescanning the reclaimable lists several
times and restarting if a page is freed. This is really subtle behavior
and it might lead to a livelock when a single freed page keeps the
allocator looping but the current task will not be able to allocate that
single page. The OOM killer would be more appropriate than looping
without any progress for an unbounded amount of time.

This patch changes the OOM detection logic and pulls it out of shrink_zone,
which is too low a level to be appropriate for any high-level decision such
as OOM, which is a per-zonelist property. It is __alloc_pages_slowpath which
knows how many attempts have been made and what the progress has been so
far, therefore it is the more appropriate place to implement this logic.

The new heuristic tries to be more deterministic and easier to follow.
It builds on an assumption that retrying makes sense only if the
currently reclaimable memory + free pages would allow the current
allocation request to succeed (as per __zone_watermark_ok) at least for
one zone in the usable zonelist.

This alone wouldn't be sufficient, though, because the writeback might
get stuck and reclaimable pages might be pinned for a really long time
or even depend on the current allocation context. Therefore there is a
feedback mechanism implemented which reduces the reclaim target after
each reclaim round without any progress. This means that we should
eventually converge to only NR_FREE_PAGES as the target and fail on the
wmark check and proceed to OOM. The backoff is simple and linear with
1/16 of the reclaimable pages for each round without any progress. We
are optimistic and reset the counter after successful reclaim rounds.
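
(For illustration: with 1.6G on the reclaimable LRUs, each round without any
progress discounts a further ~100M from the considered target, so after 16
such rounds only NR_FREE_PAGES is left for the watermark check.)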

Costly high order pages mostly preserve their semantics: those without
__GFP_REPEAT fail right away, while those which have the flag set will
back off after the amount of reclaimed pages reaches the equivalent of
the requested order. The only difference is that if there was no progress
during the reclaim we rely on the zone watermark check. This is a more
logical thing to do than the previous 1<<order attempts, which were a
result of zone_reclaimable faking the progress.

[rientjes@google.com: use zone_page_state_snapshot for NR_FREE_PAGES]
[rientjes@google.com: shrink_zones doesn't need to return anything]
Acked-by: Hillf Danton <hillf.zj@alibaba-inc.com>
Signed-off-by: Michal Hocko <mhocko@suse.com>
---
 include/linux/swap.h |  1 +
 mm/page_alloc.c      | 66 ++++++++++++++++++++++++++++++++++++++++++++++------
 mm/vmscan.c          | 25 ++++----------------
 3 files changed, 64 insertions(+), 28 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 457181844b6e..738ae2206635 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -316,6 +316,7 @@ extern void lru_cache_add_active_or_unevictable(struct page *page,
 						struct vm_area_struct *vma);
 
 /* linux/mm/vmscan.c */
+extern unsigned long zone_reclaimable_pages(struct zone *zone);
 extern unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
 					gfp_t gfp_mask, nodemask_t *mask);
 extern int __isolate_lru_page(struct page *page, isolate_mode_t mode);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index e267faad4649..af221067de6a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2984,6 +2984,13 @@ static inline bool is_thp_gfp_mask(gfp_t gfp_mask)
 	return (gfp_mask & (GFP_TRANSHUGE | __GFP_KSWAPD_RECLAIM)) == GFP_TRANSHUGE;
 }
 
+/*
+ * Number of backoff steps for potentially reclaimable pages if the direct reclaim
+ * cannot make any progress. Each step will reduce 1/MAX_STALL_BACKOFF of the
+ * reclaimable memory.
+ */
+#define MAX_STALL_BACKOFF 16
+
 static inline struct page *
 __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 						struct alloc_context *ac)
@@ -2996,6 +3003,9 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	enum migrate_mode migration_mode = MIGRATE_ASYNC;
 	bool deferred_compaction = false;
 	int contended_compaction = COMPACT_CONTENDED_NONE;
+	struct zone *zone;
+	struct zoneref *z;
+	int stall_backoff = 0;
 
 	/*
 	 * In the slowpath, we sanity check order to avoid ever trying to
@@ -3155,13 +3165,53 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	if (gfp_mask & __GFP_NORETRY)
 		goto noretry;
 
-	/* Keep reclaiming pages as long as there is reasonable progress */
+	/*
+	 * Do not retry high order allocations unless they are __GFP_REPEAT
+	 * and even then do not retry endlessly unless explicitly told so
+	 */
 	pages_reclaimed += did_some_progress;
-	if ((did_some_progress && order <= PAGE_ALLOC_COSTLY_ORDER) ||
-	    ((gfp_mask & __GFP_REPEAT) && pages_reclaimed < (1 << order))) {
-		/* Wait for some write requests to complete then retry */
-		wait_iff_congested(ac->preferred_zone, BLK_RW_ASYNC, HZ/50);
-		goto retry;
+	if (order > PAGE_ALLOC_COSTLY_ORDER) {
+		if (!(gfp_mask & __GFP_NOFAIL) &&
+		   (!(gfp_mask & __GFP_REPEAT) || pages_reclaimed >= (1<<order)))
+			goto noretry;
+
+		if (did_some_progress)
+			goto retry;
+	}
+
+	/*
+	 * Be optimistic and consider all pages on reclaimable LRUs as usable
+	 * but make sure we converge to OOM if we cannot make any progress after
+	 * multiple consecutive failed attempts.
+	 */
+	if (did_some_progress)
+		stall_backoff = 0;
+	else
+		stall_backoff = min(stall_backoff+1, MAX_STALL_BACKOFF);
+
+	/*
+	 * Keep reclaiming pages while there is a chance this will lead somewhere.
+	 * If none of the target zones can satisfy our allocation request even
+	 * if all reclaimable pages are considered then we are screwed and have
+	 * to go OOM.
+	 */
+	for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx, ac->nodemask) {
+		unsigned long free = zone_page_state_snapshot(zone, NR_FREE_PAGES);
+		unsigned long target;
+
+		target = zone_reclaimable_pages(zone);
+		target -= DIV_ROUND_UP(stall_backoff * target, MAX_STALL_BACKOFF);
+		target += free;
+
+		/*
+		 * Would the allocation succeed if we reclaimed the whole target?
+		 */
+		if (__zone_watermark_ok(zone, order, min_wmark_pages(zone),
+				ac->high_zoneidx, alloc_flags, target)) {
+			/* Wait for some write requests to complete then retry */
+			wait_iff_congested(zone, BLK_RW_ASYNC, HZ/50);
+			goto retry;
+		}
 	}
 
 	/* Reclaim has failed us, start killing things */
@@ -3170,8 +3220,10 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 		goto got_pg;
 
 	/* Retry as long as the OOM killer is making progress */
-	if (did_some_progress)
+	if (did_some_progress) {
+		stall_backoff = 0;
 		goto retry;
+	}
 
 noretry:
 	/*
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 4589cfdbe405..489212252cd6 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -192,7 +192,7 @@ static bool sane_reclaim(struct scan_control *sc)
 }
 #endif
 
-static unsigned long zone_reclaimable_pages(struct zone *zone)
+unsigned long zone_reclaimable_pages(struct zone *zone)
 {
 	unsigned long nr;
 
@@ -2516,10 +2516,8 @@ static inline bool compaction_ready(struct zone *zone, int order)
  *
  * If a zone is deemed to be full of pinned pages then just give it a light
  * scan then give up on it.
- *
- * Returns true if a zone was reclaimable.
  */
-static bool shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
+static void shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
 {
 	struct zoneref *z;
 	struct zone *zone;
@@ -2527,7 +2525,6 @@ static bool shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
 	unsigned long nr_soft_scanned;
 	gfp_t orig_mask;
 	enum zone_type requested_highidx = gfp_zone(sc->gfp_mask);
-	bool reclaimable = false;
 
 	/*
 	 * If the number of buffer_heads in the machine exceeds the maximum
@@ -2592,17 +2589,10 @@ static bool shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
 						&nr_soft_scanned);
 			sc->nr_reclaimed += nr_soft_reclaimed;
 			sc->nr_scanned += nr_soft_scanned;
-			if (nr_soft_reclaimed)
-				reclaimable = true;
 			/* need some check for avoid more shrink_zone() */
 		}
 
-		if (shrink_zone(zone, sc, zone_idx(zone) == classzone_idx))
-			reclaimable = true;
-
-		if (global_reclaim(sc) &&
-		    !reclaimable && zone_reclaimable(zone))
-			reclaimable = true;
+		shrink_zone(zone, sc, zone_idx(zone));
 	}
 
 	/*
@@ -2610,8 +2600,6 @@ static bool shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
 	 * promoted it to __GFP_HIGHMEM.
 	 */
 	sc->gfp_mask = orig_mask;
-
-	return reclaimable;
 }
 
 /*
@@ -2636,7 +2624,6 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
 	int initial_priority = sc->priority;
 	unsigned long total_scanned = 0;
 	unsigned long writeback_threshold;
-	bool zones_reclaimable;
 retry:
 	delayacct_freepages_start();
 
@@ -2647,7 +2634,7 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
 		vmpressure_prio(sc->gfp_mask, sc->target_mem_cgroup,
 				sc->priority);
 		sc->nr_scanned = 0;
-		zones_reclaimable = shrink_zones(zonelist, sc);
+		shrink_zones(zonelist, sc);
 
 		total_scanned += sc->nr_scanned;
 		if (sc->nr_reclaimed >= sc->nr_to_reclaim)
@@ -2694,10 +2681,6 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
 		goto retry;
 	}
 
-	/* Any of the zones still reclaimable?  Don't OOM. */
-	if (zones_reclaimable)
-		return 1;
-
 	return 0;
 }
 
-- 
2.6.2

^ permalink raw reply related	[flat|nested] 50+ messages in thread

* Re: [RFC 1/3] mm, oom: refactor oom detection
  2015-11-23 18:24           ` Johannes Weiner
@ 2015-11-24 10:03             ` Michal Hocko
  0 siblings, 0 replies; 50+ messages in thread
From: Michal Hocko @ 2015-11-24 10:03 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: David Rientjes, linux-mm, Andrew Morton, Linus Torvalds,
	Mel Gorman, Tetsuo Handa, Hillf Danton, KAMEZAWA Hiroyuki

On Mon 23-11-15 13:24:47, Johannes Weiner wrote:
> On Mon, Nov 23, 2015 at 10:41:06AM +0100, Michal Hocko wrote:
> > @@ -3197,8 +3197,10 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> >  		unsigned long target;
> >  
> >  		reclaimable = zone_reclaimable_pages(zone) +
> > -			      zone_page_state(zone, NR_ISOLATED_FILE) +
> > -			      zone_page_state(zone, NR_ISOLATED_ANON);
> > +			      zone_page_state(zone, NR_ISOLATED_FILE);
> > +		if (get_nr_swap_pages() > 0)
> > +			reclaimable += zone_page_state(zone, NR_ISOLATED_ANON);
> 
> Can you include the isolated counts in zone_reclaimable_pages()?

OK, this makes sense. NR_ISOLATED_* should be a temporary condition
after which pages either get back to the LRU or get migrated to a
different location and thus freed.

I will spin this into a separate patch.
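
For illustration, folding them in could look something like the following
sketch (written against the current counters, not the actual follow-up patch):

	unsigned long zone_reclaimable_pages(struct zone *zone)
	{
		unsigned long nr;

		/* file pages are reclaimable even without swap */
		nr = zone_page_state(zone, NR_ACTIVE_FILE) +
		     zone_page_state(zone, NR_INACTIVE_FILE) +
		     zone_page_state(zone, NR_ISOLATED_FILE);

		/* anon - including isolated anon - only matters with swap */
		if (get_nr_swap_pages() > 0)
			nr += zone_page_state(zone, NR_ACTIVE_ANON) +
			      zone_page_state(zone, NR_INACTIVE_ANON) +
			      zone_page_state(zone, NR_ISOLATED_ANON);

		return nr;
	}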

Thanks!
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC 1/3] mm, oom: refactor oom detection
  2015-11-23  9:41         ` Michal Hocko
@ 2015-11-23 18:24           ` Johannes Weiner
  2015-11-24 10:03             ` Michal Hocko
  0 siblings, 1 reply; 50+ messages in thread
From: Johannes Weiner @ 2015-11-23 18:24 UTC (permalink / raw)
  To: Michal Hocko
  Cc: David Rientjes, linux-mm, Andrew Morton, Linus Torvalds,
	Mel Gorman, Tetsuo Handa, Hillf Danton, KAMEZAWA Hiroyuki

On Mon, Nov 23, 2015 at 10:41:06AM +0100, Michal Hocko wrote:
> @@ -3197,8 +3197,10 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  		unsigned long target;
>  
>  		reclaimable = zone_reclaimable_pages(zone) +
> -			      zone_page_state(zone, NR_ISOLATED_FILE) +
> -			      zone_page_state(zone, NR_ISOLATED_ANON);
> +			      zone_page_state(zone, NR_ISOLATED_FILE);
> +		if (get_nr_swap_pages() > 0)
> +			reclaimable += zone_page_state(zone, NR_ISOLATED_ANON);

Can you include the isolated counts in zone_reclaimable_pages()?

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC 1/3] mm, oom: refactor oom detection
  2015-11-20 23:27       ` David Rientjes
@ 2015-11-23  9:41         ` Michal Hocko
  2015-11-23 18:24           ` Johannes Weiner
  0 siblings, 1 reply; 50+ messages in thread
From: Michal Hocko @ 2015-11-23  9:41 UTC (permalink / raw)
  To: David Rientjes
  Cc: linux-mm, Andrew Morton, Linus Torvalds, Mel Gorman,
	Johannes Weiner, Tetsuo Handa, Hillf Danton, KAMEZAWA Hiroyuki

On Fri 20-11-15 15:27:39, David Rientjes wrote:
> On Fri, 20 Nov 2015, Michal Hocko wrote:
> 
> > > > +		unsigned long reclaimable;
> > > > +		unsigned long target;
> > > > +
> > > > +		reclaimable = zone_reclaimable_pages(zone) +
> > > > +			      zone_page_state(zone, NR_ISOLATED_FILE) +
> > > > +			      zone_page_state(zone, NR_ISOLATED_ANON);
> > > 
> > > Does NR_ISOLATED_ANON mean anything relevant here in swapless 
> > > environments?
> > 
> > It should be 0 so I didn't bother to check for swapless configuration.
> > 
> 
> I'm not sure I understand your point, memory compaction certainly 
> increments NR_ISOLATED_ANON and that would be considered unreclaimable in 
> a swapless environment, correct?

My bad. I have completely missed that compaction/migration is updating
the counter as well. I would expect that the number shouldn't be too large
to matter, but I guess it will be better to simply exclude it. I will
fold this into the first patch.

Thanks
---
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 54476e71b572..7d885d7fae86 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3197,8 +3197,10 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 		unsigned long target;
 
 		reclaimable = zone_reclaimable_pages(zone) +
-			      zone_page_state(zone, NR_ISOLATED_FILE) +
-			      zone_page_state(zone, NR_ISOLATED_ANON);
+			      zone_page_state(zone, NR_ISOLATED_FILE);
+		if (get_nr_swap_pages() > 0)
+			reclaimable += zone_page_state(zone, NR_ISOLATED_ANON);
+
 		target = reclaimable;
 		target -= DIV_ROUND_UP(stall_backoff * target, MAX_STALL_BACKOFF);
 		target += free;

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply related	[flat|nested] 50+ messages in thread

* Re: [RFC 1/3] mm, oom: refactor oom detection
  2015-11-20  9:06     ` Michal Hocko
@ 2015-11-20 23:27       ` David Rientjes
  2015-11-23  9:41         ` Michal Hocko
  0 siblings, 1 reply; 50+ messages in thread
From: David Rientjes @ 2015-11-20 23:27 UTC (permalink / raw)
  To: Michal Hocko
  Cc: linux-mm, Andrew Morton, Linus Torvalds, Mel Gorman,
	Johannes Weiner, Tetsuo Handa, Hillf Danton, KAMEZAWA Hiroyuki

On Fri, 20 Nov 2015, Michal Hocko wrote:

> > > +		unsigned long reclaimable;
> > > +		unsigned long target;
> > > +
> > > +		reclaimable = zone_reclaimable_pages(zone) +
> > > +			      zone_page_state(zone, NR_ISOLATED_FILE) +
> > > +			      zone_page_state(zone, NR_ISOLATED_ANON);
> > 
> > Does NR_ISOLATED_ANON mean anything relevant here in swapless 
> > environments?
> 
> It should be 0 so I didn't bother to check for swapless configuration.
> 

I'm not sure I understand your point, memory compaction certainly 
increments NR_ISOLATED_ANON and that would be considered unreclaimable in 
a swapless environment, correct?

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [RFC 1/3] mm, oom: refactor oom detection
  2015-11-19 23:01   ` David Rientjes
@ 2015-11-20  9:06     ` Michal Hocko
  2015-11-20 23:27       ` David Rientjes
  0 siblings, 1 reply; 50+ messages in thread
From: Michal Hocko @ 2015-11-20  9:06 UTC (permalink / raw)
  To: David Rientjes
  Cc: linux-mm, Andrew Morton, Linus Torvalds, Mel Gorman,
	Johannes Weiner, Tetsuo Handa, Hillf Danton, KAMEZAWA Hiroyuki

On Thu 19-11-15 15:01:38, David Rientjes wrote:
> On Wed, 18 Nov 2015, Michal Hocko wrote:
[...]
> > @@ -3155,13 +3165,57 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> >  	if (gfp_mask & __GFP_NORETRY)
> >  		goto noretry;
> >  
> > -	/* Keep reclaiming pages as long as there is reasonable progress */
> > +	/*
> > +	 * Do not retry high order allocations unless they are __GFP_REPEAT
> > +	 * and even then do not retry endlessly unless explicitly told so
> > +	 */
> >  	pages_reclaimed += did_some_progress;
> > -	if ((did_some_progress && order <= PAGE_ALLOC_COSTLY_ORDER) ||
> > -	    ((gfp_mask & __GFP_REPEAT) && pages_reclaimed < (1 << order))) {
> > -		/* Wait for some write requests to complete then retry */
> > -		wait_iff_congested(ac->preferred_zone, BLK_RW_ASYNC, HZ/50);
> > -		goto retry;
> > +	if (order > PAGE_ALLOC_COSTLY_ORDER) {
> > +		if (!(gfp_mask & __GFP_NOFAIL) &&
> > +		   (!(gfp_mask & __GFP_REPEAT) || pages_reclaimed >= (1<<order)))
> > +			goto noretry;
> > +
> > +		if (did_some_progress)
> > +			goto retry;
> > +	}
> 
> First of all, thanks very much for attacking this issue!
> 
> I'm concerned that we'll reach stall_backoff == MAX_STALL_BACKOFF too 
> quickly if the wait_iff_congested() is removed.  While not immediately 
> being available for reclaim, this has at least partially stalled in the 
> past, which may have resulted in external memory freeing.  I'm wondering if 
> it would make sense to keep it, if nothing more than to avoid an immediate 
> retry.

My experiments show that wait_iff_congested slept only very rarely if at
all (even for loads with heavy IO). There might be other loads where
it really hits, though. If you have any of those I would be more than
happy if you could share them or at least test them with these patches.

If you are concerned about removed wait_iff_congested for costly
__GFP_REPEAT allocations then the follow up patch changes that to use a
common sleep&retry logic.

> > +
> > +	/*
> > +	 * Be optimistic and consider all pages on reclaimable LRUs as usable
> > +	 * but make sure we converge to OOM if we cannot make any progress after
> > +	 * multiple consecutive failed attempts.
> > +	 */
> > +	if (did_some_progress)
> > +		stall_backoff = 0;
> > +	else
> > +		stall_backoff = min(stall_backoff+1, MAX_STALL_BACKOFF);
> > +
> > +	/*
> > +	 * Keep reclaiming pages while there is a chance this will lead somewhere.
> > +	 * If none of the target zones can satisfy our allocation request even
> > +	 * if all reclaimable pages are considered then we are screwed and have
> > +	 * to go OOM.
> > +	 */
> > +	for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx, ac->nodemask) {
> > +		unsigned long free = zone_page_state(zone, NR_FREE_PAGES);
> 
> This is concerning, I would think that you would want to use 
> zone_page_state_snapshot() at the very least for when 
> stall_backoff == MAX_STALL_BACKOFF.

OK, this is a fair point. In an extreme case where vmstat counters are
way outdated we might loop endlessly. I will just use the _snapshot variant.
The overhead shouldn't be a concern as this is a slow path.

Other counters are using backoff so they do not need this special
treatment.
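
For completeness, the _snapshot variant differs only in that it folds in the
pending per-cpu deltas which the plain reader ignores; roughly (from
include/linux/vmstat.h, modulo details):

	static inline unsigned long zone_page_state_snapshot(struct zone *zone,
						enum zone_stat_item item)
	{
		long x = atomic_long_read(&zone->vm_stat[item]);
	#ifdef CONFIG_SMP
		int cpu;

		/* add the not-yet-folded per-cpu diffs */
		for_each_online_cpu(cpu)
			x += per_cpu_ptr(zone->pageset, cpu)->vm_stat_diff[item];

		if (x < 0)
			x = 0;
	#endif
		return x;
	}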

> > +		unsigned long reclaimable;
> > +		unsigned long target;
> > +
> > +		reclaimable = zone_reclaimable_pages(zone) +
> > +			      zone_page_state(zone, NR_ISOLATED_FILE) +
> > +			      zone_page_state(zone, NR_ISOLATED_ANON);
> 
> Does NR_ISOLATED_ANON mean anything relevant here in swapless 
> environments?

It should be 0 so I didn't bother to check for swapless configuration.

[...]

> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index a4507ecaefbf..9060a71e5a90 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -192,7 +192,7 @@ static bool sane_reclaim(struct scan_control *sc)
> >  }
> >  #endif
> >  
> > -static unsigned long zone_reclaimable_pages(struct zone *zone)
> > +unsigned long zone_reclaimable_pages(struct zone *zone)
> >  {
> >  	unsigned long nr;
> >  
> > @@ -2594,10 +2594,6 @@ static bool shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
> >  
> >  		if (shrink_zone(zone, sc, zone_idx(zone) == classzone_idx))
> >  			reclaimable = true;
> > -
> > -		if (global_reclaim(sc) &&
> > -		    !reclaimable && zone_reclaimable(zone))
> > -			reclaimable = true;
> >  	}
> >  
> >  	/*
> 
> It's possible to just make shrink_zones() void and drop the reclaimable 
> variable.

True, will do that.
 
> Otherwise looks good!

Thanks for the review!

Here is what I will fold into the original patch:
---
commit b8687e8406f4ec1b194b259acaea115711d319cd
Author: Michal Hocko <mhocko@suse.com>
Date:   Fri Nov 20 10:04:22 2015 +0100

    fold me: mm, oom: refactor oom detection
    
    [rientjes@google.com: use zone_page_state_snapshot for NR_FREE_PAGES]
    [rientjes@google.com: shrink_zones doesn't need to return anything]

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 999c8cdbe7b5..54476e71b572 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3192,7 +3192,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	 * to go OOM.
 	 */
 	for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx, ac->nodemask) {
-		unsigned long free = zone_page_state(zone, NR_FREE_PAGES);
+		unsigned long free = zone_page_state_snapshot(zone, NR_FREE_PAGES);
 		unsigned long reclaimable;
 		unsigned long target;
 
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 9060a71e5a90..784e2b28d2fb 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2511,10 +2511,8 @@ static inline bool compaction_ready(struct zone *zone, int order)
  *
  * If a zone is deemed to be full of pinned pages then just give it a light
  * scan then give up on it.
- *
- * Returns true if a zone was reclaimable.
  */
-static bool shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
+static void shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
 {
 	struct zoneref *z;
 	struct zone *zone;
@@ -2522,7 +2520,6 @@ static bool shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
 	unsigned long nr_soft_scanned;
 	gfp_t orig_mask;
 	enum zone_type requested_highidx = gfp_zone(sc->gfp_mask);
-	bool reclaimable = false;
 
 	/*
 	 * If the number of buffer_heads in the machine exceeds the maximum
@@ -2587,13 +2584,10 @@ static bool shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
 						&nr_soft_scanned);
 			sc->nr_reclaimed += nr_soft_reclaimed;
 			sc->nr_scanned += nr_soft_scanned;
-			if (nr_soft_reclaimed)
-				reclaimable = true;
 			/* need some check for avoid more shrink_zone() */
 		}
 
-		if (shrink_zone(zone, sc, zone_idx(zone) == classzone_idx))
-			reclaimable = true;
+		shrink_zone(zone, sc, zone_idx(zone));
 	}
 
 	/*
@@ -2601,8 +2595,6 @@ static bool shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
 	 * promoted it to __GFP_HIGHMEM.
 	 */
 	sc->gfp_mask = orig_mask;
-
-	return reclaimable;
 }
 
 /*
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply related	[flat|nested] 50+ messages in thread

* Re: [RFC 1/3] mm, oom: refactor oom detection
  2015-11-18 13:03 ` [RFC 1/3] mm, oom: refactor oom detection Michal Hocko
@ 2015-11-19 23:01   ` David Rientjes
  2015-11-20  9:06     ` Michal Hocko
  0 siblings, 1 reply; 50+ messages in thread
From: David Rientjes @ 2015-11-19 23:01 UTC (permalink / raw)
  To: Michal Hocko
  Cc: linux-mm, Andrew Morton, Linus Torvalds, Mel Gorman,
	Johannes Weiner, Tetsuo Handa, Hillf Danton, KAMEZAWA Hiroyuki,
	Michal Hocko

On Wed, 18 Nov 2015, Michal Hocko wrote:

> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 8034909faad2..020c005c5bc0 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2992,6 +2992,13 @@ static inline bool is_thp_gfp_mask(gfp_t gfp_mask)
>  	return (gfp_mask & (GFP_TRANSHUGE | __GFP_KSWAPD_RECLAIM)) == GFP_TRANSHUGE;
>  }
>  
> +/*
> + * Number of backoff steps for potentially reclaimable pages if the direct reclaim
> + * cannot make any progress. Each step will reduce 1/MAX_STALL_BACKOFF of the
> + * reclaimable memory.
> + */
> +#define MAX_STALL_BACKOFF 16
> +
>  static inline struct page *
>  __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  						struct alloc_context *ac)
> @@ -3004,6 +3011,9 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  	enum migrate_mode migration_mode = MIGRATE_ASYNC;
>  	bool deferred_compaction = false;
>  	int contended_compaction = COMPACT_CONTENDED_NONE;
> +	struct zone *zone;
> +	struct zoneref *z;
> +	int stall_backoff = 0;
>  
>  	/*
>  	 * In the slowpath, we sanity check order to avoid ever trying to
> @@ -3155,13 +3165,57 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  	if (gfp_mask & __GFP_NORETRY)
>  		goto noretry;
>  
> -	/* Keep reclaiming pages as long as there is reasonable progress */
> +	/*
> +	 * Do not retry high order allocations unless they are __GFP_REPEAT
> +	 * and even then do not retry endlessly unless explicitly told so
> +	 */
>  	pages_reclaimed += did_some_progress;
> -	if ((did_some_progress && order <= PAGE_ALLOC_COSTLY_ORDER) ||
> -	    ((gfp_mask & __GFP_REPEAT) && pages_reclaimed < (1 << order))) {
> -		/* Wait for some write requests to complete then retry */
> -		wait_iff_congested(ac->preferred_zone, BLK_RW_ASYNC, HZ/50);
> -		goto retry;
> +	if (order > PAGE_ALLOC_COSTLY_ORDER) {
> +		if (!(gfp_mask & __GFP_NOFAIL) &&
> +		   (!(gfp_mask & __GFP_REPEAT) || pages_reclaimed >= (1<<order)))
> +			goto noretry;
> +
> +		if (did_some_progress)
> +			goto retry;
> +	}

First of all, thanks very much for attacking this issue!

I'm concerned that we'll reach stall_backoff == MAX_STALL_BACKOFF too 
quickly if the wait_iff_congested() is removed.  While it did not make 
pages immediately available for reclaim, it has at least partially 
stalled the allocator in the past, which may have resulted in external 
memory freeing.  I'm wondering if it would make sense to keep it, if 
for nothing more than to avoid an immediate retry.

> +
> +	/*
> +	 * Be optimistic and consider all pages on reclaimable LRUs as usable
> +	 * but make sure we converge to OOM if we cannot make any progress after
> +	 * multiple consecutive failed attempts.
> +	 */
> +	if (did_some_progress)
> +		stall_backoff = 0;
> +	else
> +		stall_backoff = min(stall_backoff+1, MAX_STALL_BACKOFF);
> +
> +	/*
> +	 * Keep reclaiming pages while there is a chance this will lead somewhere.
> +	 * If none of the target zones can satisfy our allocation request even
> +	 * if all reclaimable pages are considered then we are screwed and have
> +	 * to go OOM.
> +	 */
> +	for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx, ac->nodemask) {
> +		unsigned long free = zone_page_state(zone, NR_FREE_PAGES);

This is concerning: I would think that you would want to use 
zone_page_state_snapshot() at the very least for when 
stall_backoff == MAX_STALL_BACKOFF.
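
For context, a minimal user-space sketch of why a snapshot read can
matter here: the global vmstat counter lags behind per-CPU deltas that
have not been folded back yet, and zone_page_state_snapshot() folds
them in at the price of walking the online CPUs. The names below only
mimic the kernel's vmstat scheme; this is an illustration, not the
kernel implementation.

/* illustrative only: a "cheap" read vs a "snapshot" read of a counter */
#include <stdio.h>

#define NR_CPUS 4

static long vm_stat_free = 100;                   /* global, folded periodically */
static long vm_stat_diff_free[NR_CPUS] = { 5, -2, 7, 0 }; /* per-CPU, not folded yet */

static long page_state(void)                      /* may miss recent frees */
{
        return vm_stat_free;
}

static long page_state_snapshot(void)             /* folds in the per-CPU deltas */
{
        long x = vm_stat_free;
        int cpu;

        for (cpu = 0; cpu < NR_CPUS; cpu++)
                x += vm_stat_diff_free[cpu];
        return x < 0 ? 0 : x;
}

int main(void)
{
        printf("cheap read:    %ld\n", page_state());          /* 100 */
        printf("snapshot read: %ld\n", page_state_snapshot()); /* 110 */
        return 0;
}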

> +		unsigned long reclaimable;
> +		unsigned long target;
> +
> +		reclaimable = zone_reclaimable_pages(zone) +
> +			      zone_page_state(zone, NR_ISOLATED_FILE) +
> +			      zone_page_state(zone, NR_ISOLATED_ANON);

Does NR_ISOLATED_ANON mean anything relevant here in swapless 
environments?

> +		target = reclaimable;
> +		target -= DIV_ROUND_UP(stall_backoff * target, MAX_STALL_BACKOFF);
> +		target += free;
> +
> +		/*
> +		 * Would the allocation succeed if we reclaimed the whole target?
> +		 */
> +		if (__zone_watermark_ok(zone, order, min_wmark_pages(zone),
> +				ac->high_zoneidx, alloc_flags, target)) {
> +			/* Wait for some write requests to complete then retry */
> +			wait_iff_congested(zone, BLK_RW_ASYNC, HZ/50);
> +			goto retry;
> +		}
>  	}
>  
>  	/* Reclaim has failed us, start killing things */
> @@ -3170,8 +3224,10 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  		goto got_pg;
>  
>  	/* Retry as long as the OOM killer is making progress */
> -	if (did_some_progress)
> +	if (did_some_progress) {
> +		stall_backoff = 0;
>  		goto retry;
> +	}
>  
>  noretry:
>  	/*
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index a4507ecaefbf..9060a71e5a90 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -192,7 +192,7 @@ static bool sane_reclaim(struct scan_control *sc)
>  }
>  #endif
>  
> -static unsigned long zone_reclaimable_pages(struct zone *zone)
> +unsigned long zone_reclaimable_pages(struct zone *zone)
>  {
>  	unsigned long nr;
>  
> @@ -2594,10 +2594,6 @@ static bool shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
>  
>  		if (shrink_zone(zone, sc, zone_idx(zone) == classzone_idx))
>  			reclaimable = true;
> -
> -		if (global_reclaim(sc) &&
> -		    !reclaimable && zone_reclaimable(zone))
> -			reclaimable = true;
>  	}
>  
>  	/*

It's possible to just make shrink_zones() void and drop the reclaimable 
variable.

Otherwise looks good!

^ permalink raw reply	[flat|nested] 50+ messages in thread

* [RFC 1/3] mm, oom: refactor oom detection
  2015-11-18 13:03 [RFC 0/3] OOM detection rework v2 Michal Hocko
@ 2015-11-18 13:03 ` Michal Hocko
  2015-11-19 23:01   ` David Rientjes
  0 siblings, 1 reply; 50+ messages in thread
From: Michal Hocko @ 2015-11-18 13:03 UTC (permalink / raw)
  To: linux-mm
  Cc: Andrew Morton, Linus Torvalds, Mel Gorman, Johannes Weiner,
	David Rientjes, Tetsuo Handa, Hillf Danton, KAMEZAWA Hiroyuki,
	Michal Hocko

From: Michal Hocko <mhocko@suse.com>

__alloc_pages_slowpath has traditionally relied on direct reclaim and
did_some_progress as an indicator that it makes sense to retry the
allocation rather than declaring OOM. shrink_zones had to rely on
zone_reclaimable if shrink_zone didn't make any progress, to prevent a
premature OOM killer invocation - the LRU might be full of dirty or
writeback pages and direct reclaim cannot clean those up.

zone_reclaimable allows rescanning the reclaimable lists several times
and restarting if a page is freed. This is really subtle behavior and
it might lead to a livelock when a single freed page keeps the
allocator looping while the current task is not able to allocate that
single page. The OOM killer would be more appropriate than looping
without any progress for an unbounded amount of time.

This patch changes the OOM detection logic and pulls it out of
shrink_zone, which is too low a level to be appropriate for high level
decisions such as OOM, which is a per-zonelist property. It is
__alloc_pages_slowpath which knows how many attempts have been made and
what the progress has been so far, so it is the more appropriate place
to implement this logic.

The new heuristic tries to be more deterministic and easier to follow.
It builds on an assumption that retrying makes sense only if the
currently reclaimable memory + free pages would allow the current
allocation request to succeed (as per __zone_watermark_ok) at least for
one zone in the usable zonelist.

This alone wouldn't be sufficient, though, because writeback might get
stuck and reclaimable pages might be pinned for a really long time or
even depend on the current allocation context. Therefore a feedback
mechanism is implemented which reduces the reclaim target after each
reclaim round without any progress. This means that we should
eventually converge to only NR_FREE_PAGES as the target, fail the
wmark check and proceed to OOM. The backoff is simple and linear, with
1/MAX_STALL_BACKOFF (1/16) of the reclaimable pages shaved off for each
round without any progress. We are optimistic and reset the counter
after successful reclaim rounds.
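
To illustrate the convergence, here is a minimal stand-alone sketch of
the target computation with made-up numbers; the real code of course
uses the zone vmstat counters and __zone_watermark_ok() rather than the
hand-rolled threshold below:

/* illustrative only: how the reclaim target shrinks with stall_backoff */
#include <stdio.h>

#define MAX_STALL_BACKOFF 16
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

int main(void)
{
        unsigned long reclaimable = 32000;  /* made-up reclaimable pages */
        unsigned long free = 1000;          /* made-up NR_FREE_PAGES */
        unsigned long needed = 2000;        /* made-up watermark threshold */
        int stall_backoff;

        for (stall_backoff = 0; stall_backoff <= MAX_STALL_BACKOFF; stall_backoff++) {
                unsigned long target = reclaimable;

                target -= DIV_ROUND_UP(stall_backoff * target, MAX_STALL_BACKOFF);
                target += free;

                printf("backoff %2d: target %5lu -> %s\n", stall_backoff, target,
                       target >= needed ? "retry" : "give up (OOM)");
        }
        return 0;
}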

Costly high order allocations mostly preserve their semantics: those
without __GFP_REPEAT fail right away while those which have the flag
set will back off once the amount of reclaimed pages reaches the
equivalent of the requested order. The only difference is that if there
was no progress during the reclaim we rely on the zone watermark check.
This is a more logical thing to do than the previous 1<<order attempts,
which were a result of zone_reclaimable faking the progress.
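
The costly-order policy can be sketched as the following stand-alone
decision helper; the gfp bit values and the helper itself are
illustrative stand-ins rather than the kernel definitions, only
PAGE_ALLOC_COSTLY_ORDER matches the kernel constant:

/* illustrative only: three-way retry decision for high-order requests */
#include <stdio.h>

#define PAGE_ALLOC_COSTLY_ORDER 3
#define __GFP_REPEAT (1u << 0)  /* illustrative bit values, not the kernel ones */
#define __GFP_NOFAIL (1u << 1)

enum retry_decision { GIVE_UP, RETRY_NOW, CHECK_WATERMARKS };

static enum retry_decision costly_order_policy(unsigned int gfp_mask,
                                               unsigned int order,
                                               unsigned long pages_reclaimed,
                                               unsigned long did_some_progress)
{
        if (order <= PAGE_ALLOC_COSTLY_ORDER)
                return CHECK_WATERMARKS;        /* low orders always reach the backoff check */

        if (!(gfp_mask & __GFP_NOFAIL) &&
            (!(gfp_mask & __GFP_REPEAT) || pages_reclaimed >= (1UL << order)))
                return GIVE_UP;                 /* "noretry" in the patch */

        if (did_some_progress)
                return RETRY_NOW;               /* reclaim still works, just try again */

        return CHECK_WATERMARKS;                /* no progress: fall back to the wmark check */
}

int main(void)
{
        printf("%d\n", costly_order_policy(__GFP_REPEAT, 5, 8, 1));  /* 1: RETRY_NOW */
        printf("%d\n", costly_order_policy(0, 5, 0, 1));             /* 0: GIVE_UP, no __GFP_REPEAT */
        printf("%d\n", costly_order_policy(__GFP_REPEAT, 5, 64, 0)); /* 0: GIVE_UP, >= 1 << 5 reclaimed */
        return 0;
}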

Acked-by: Hillf Danton <hillf.zj@alibaba-inc.com>
Signed-off-by: Michal Hocko <mhocko@suse.com>
---
 include/linux/swap.h |  1 +
 mm/page_alloc.c      | 70 ++++++++++++++++++++++++++++++++++++++++++++++------
 mm/vmscan.c          | 13 ++--------
 3 files changed, 66 insertions(+), 18 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 457181844b6e..738ae2206635 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -316,6 +316,7 @@ extern void lru_cache_add_active_or_unevictable(struct page *page,
 						struct vm_area_struct *vma);
 
 /* linux/mm/vmscan.c */
+extern unsigned long zone_reclaimable_pages(struct zone *zone);
 extern unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
 					gfp_t gfp_mask, nodemask_t *mask);
 extern int __isolate_lru_page(struct page *page, isolate_mode_t mode);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 8034909faad2..020c005c5bc0 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2992,6 +2992,13 @@ static inline bool is_thp_gfp_mask(gfp_t gfp_mask)
 	return (gfp_mask & (GFP_TRANSHUGE | __GFP_KSWAPD_RECLAIM)) == GFP_TRANSHUGE;
 }
 
+/*
+ * Number of backoff steps for potentially reclaimable pages if the direct reclaim
+ * cannot make any progress. Each step will reduce 1/MAX_STALL_BACKOFF of the
+ * reclaimable memory.
+ */
+#define MAX_STALL_BACKOFF 16
+
 static inline struct page *
 __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 						struct alloc_context *ac)
@@ -3004,6 +3011,9 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	enum migrate_mode migration_mode = MIGRATE_ASYNC;
 	bool deferred_compaction = false;
 	int contended_compaction = COMPACT_CONTENDED_NONE;
+	struct zone *zone;
+	struct zoneref *z;
+	int stall_backoff = 0;
 
 	/*
 	 * In the slowpath, we sanity check order to avoid ever trying to
@@ -3155,13 +3165,57 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	if (gfp_mask & __GFP_NORETRY)
 		goto noretry;
 
-	/* Keep reclaiming pages as long as there is reasonable progress */
+	/*
+	 * Do not retry high order allocations unless they are __GFP_REPEAT
+	 * and even then do not retry endlessly unless explicitly told so
+	 */
 	pages_reclaimed += did_some_progress;
-	if ((did_some_progress && order <= PAGE_ALLOC_COSTLY_ORDER) ||
-	    ((gfp_mask & __GFP_REPEAT) && pages_reclaimed < (1 << order))) {
-		/* Wait for some write requests to complete then retry */
-		wait_iff_congested(ac->preferred_zone, BLK_RW_ASYNC, HZ/50);
-		goto retry;
+	if (order > PAGE_ALLOC_COSTLY_ORDER) {
+		if (!(gfp_mask & __GFP_NOFAIL) &&
+		   (!(gfp_mask & __GFP_REPEAT) || pages_reclaimed >= (1<<order)))
+			goto noretry;
+
+		if (did_some_progress)
+			goto retry;
+	}
+
+	/*
+	 * Be optimistic and consider all pages on reclaimable LRUs as usable
+	 * but make sure we converge to OOM if we cannot make any progress after
+	 * multiple consecutive failed attempts.
+	 */
+	if (did_some_progress)
+		stall_backoff = 0;
+	else
+		stall_backoff = min(stall_backoff+1, MAX_STALL_BACKOFF);
+
+	/*
+	 * Keep reclaiming pages while there is a chance this will lead somewhere.
+	 * If none of the target zones can satisfy our allocation request even
+	 * if all reclaimable pages are considered then we are screwed and have
+	 * to go OOM.
+	 */
+	for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx, ac->nodemask) {
+		unsigned long free = zone_page_state(zone, NR_FREE_PAGES);
+		unsigned long reclaimable;
+		unsigned long target;
+
+		reclaimable = zone_reclaimable_pages(zone) +
+			      zone_page_state(zone, NR_ISOLATED_FILE) +
+			      zone_page_state(zone, NR_ISOLATED_ANON);
+		target = reclaimable;
+		target -= DIV_ROUND_UP(stall_backoff * target, MAX_STALL_BACKOFF);
+		target += free;
+
+		/*
+		 * Would the allocation succeed if we reclaimed the whole target?
+		 */
+		if (__zone_watermark_ok(zone, order, min_wmark_pages(zone),
+				ac->high_zoneidx, alloc_flags, target)) {
+			/* Wait for some write requests to complete then retry */
+			wait_iff_congested(zone, BLK_RW_ASYNC, HZ/50);
+			goto retry;
+		}
 	}
 
 	/* Reclaim has failed us, start killing things */
@@ -3170,8 +3224,10 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 		goto got_pg;
 
 	/* Retry as long as the OOM killer is making progress */
-	if (did_some_progress)
+	if (did_some_progress) {
+		stall_backoff = 0;
 		goto retry;
+	}
 
 noretry:
 	/*
diff --git a/mm/vmscan.c b/mm/vmscan.c
index a4507ecaefbf..9060a71e5a90 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -192,7 +192,7 @@ static bool sane_reclaim(struct scan_control *sc)
 }
 #endif
 
-static unsigned long zone_reclaimable_pages(struct zone *zone)
+unsigned long zone_reclaimable_pages(struct zone *zone)
 {
 	unsigned long nr;
 
@@ -2594,10 +2594,6 @@ static bool shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
 
 		if (shrink_zone(zone, sc, zone_idx(zone) == classzone_idx))
 			reclaimable = true;
-
-		if (global_reclaim(sc) &&
-		    !reclaimable && zone_reclaimable(zone))
-			reclaimable = true;
 	}
 
 	/*
@@ -2631,7 +2627,6 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
 	int initial_priority = sc->priority;
 	unsigned long total_scanned = 0;
 	unsigned long writeback_threshold;
-	bool zones_reclaimable;
 retry:
 	delayacct_freepages_start();
 
@@ -2642,7 +2637,7 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
 		vmpressure_prio(sc->gfp_mask, sc->target_mem_cgroup,
 				sc->priority);
 		sc->nr_scanned = 0;
-		zones_reclaimable = shrink_zones(zonelist, sc);
+		shrink_zones(zonelist, sc);
 
 		total_scanned += sc->nr_scanned;
 		if (sc->nr_reclaimed >= sc->nr_to_reclaim)
@@ -2689,10 +2684,6 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
 		goto retry;
 	}
 
-	/* Any of the zones still reclaimable?  Don't OOM. */
-	if (zones_reclaimable)
-		return 1;
-
 	return 0;
 }
 
-- 
2.6.2

^ permalink raw reply related	[flat|nested] 50+ messages in thread

end of thread, other threads:[~2015-12-14 18:35 UTC | newest]

Thread overview: 50+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-10-29 15:17 RFC: OOM detection rework v1 mhocko
2015-10-29 15:17 ` mhocko
2015-10-29 15:17 ` [RFC 1/3] mm, oom: refactor oom detection mhocko
2015-10-29 15:17   ` mhocko
2015-10-30  4:10   ` Hillf Danton
2015-10-30  4:10     ` Hillf Danton
2015-10-30  8:36     ` Michal Hocko
2015-10-30  8:36       ` Michal Hocko
2015-10-30 10:14       ` Michal Hocko
2015-10-30 10:14         ` Michal Hocko
2015-10-30 13:32         ` Tetsuo Handa
2015-10-30 13:32           ` Tetsuo Handa
2015-10-30 14:55           ` Michal Hocko
2015-10-30 14:55             ` Michal Hocko
2015-10-31  3:57         ` Hillf Danton
2015-10-31  3:57           ` Hillf Danton
2015-10-30  5:23   ` Kamezawa Hiroyuki
2015-10-30  5:23     ` Kamezawa Hiroyuki
2015-10-30  8:23     ` Michal Hocko
2015-10-30  8:23       ` Michal Hocko
2015-10-30  9:41       ` Kamezawa Hiroyuki
2015-10-30  9:41         ` Kamezawa Hiroyuki
2015-10-30 10:18         ` Michal Hocko
2015-10-30 10:18           ` Michal Hocko
2015-11-12 12:39   ` Michal Hocko
2015-11-12 12:39     ` Michal Hocko
2015-10-29 15:17 ` [RFC 2/3] mm: throttle on IO only when there are too many dirty and writeback pages mhocko
2015-10-29 15:17   ` mhocko
2015-10-30  4:18   ` Hillf Danton
2015-10-30  4:18     ` Hillf Danton
2015-10-30  8:37     ` Michal Hocko
2015-10-30  8:37       ` Michal Hocko
2015-10-30  5:48   ` Kamezawa Hiroyuki
2015-10-30  5:48     ` Kamezawa Hiroyuki
2015-10-30  8:38     ` Michal Hocko
2015-10-30  8:38       ` Michal Hocko
2015-10-29 15:17 ` [RFC 3/3] mm: use watermak checks for __GFP_REPEAT high order allocations mhocko
2015-10-29 15:17   ` mhocko
2015-11-12 12:44 ` RFC: OOM detection rework v1 Michal Hocko
2015-11-12 12:44   ` Michal Hocko
2015-11-18 13:03 [RFC 0/3] OOM detection rework v2 Michal Hocko
2015-11-18 13:03 ` [RFC 1/3] mm, oom: refactor oom detection Michal Hocko
2015-11-19 23:01   ` David Rientjes
2015-11-20  9:06     ` Michal Hocko
2015-11-20 23:27       ` David Rientjes
2015-11-23  9:41         ` Michal Hocko
2015-11-23 18:24           ` Johannes Weiner
2015-11-24 10:03             ` Michal Hocko
2015-12-01 12:56 [RFC 0/3] OOM detection rework v3 Michal Hocko
2015-12-01 12:56 ` [RFC 1/3] mm, oom: refactor oom detection Michal Hocko
2015-12-11 16:16   ` Johannes Weiner
2015-12-14 18:34     ` Michal Hocko
