linux-kernel.vger.kernel.org archive mirror
* RFC: OOM detection rework v1
@ 2015-10-29 15:17 mhocko
  2015-10-29 15:17 ` [RFC 1/3] mm, oom: refactor oom detection mhocko
                   ` (3 more replies)
  0 siblings, 4 replies; 20+ messages in thread
From: mhocko @ 2015-10-29 15:17 UTC (permalink / raw)
  To: linux-mm
  Cc: Andrew Morton, Linus Torvalds, Mel Gorman, Johannes Weiner,
	Rik van Riel, David Rientjes, Tetsuo Handa, LKML

Hi,
as pointed out by Linus [1][2], relying on zone_reclaimable as a way to
communicate reclaim progress is rather dubious. I tend to agree. Not
only is it really obscure, it is also not hard to imagine cases where a
single page freed in the loop keeps all the reclaimers looping without
making any progress because their gfp_mask wouldn't allow them to get
that page anyway (e.g. a single GFP_ATOMIC alloc and free loop). This is
rare enough that it doesn't happen in practice, but the current logic is
obscure, hard to follow and also non-deterministic.

This is an attempt to make the OOM detection more deterministic and
easier to follow because each reclaimer basically tracks its own
progress, which is implemented at the page allocator layer rather than
spread out between the allocator and the reclaim path. More on the
implementation is described in the first patch.

I have tested several different scenarios, but it should be clear that
testing the OOM killer in a representative way is quite hard. There is
usually only a tiny gap between almost-OOM and full-blown OOM, and it is
often time sensitive. Anyway, I have tested the following 3 scenarios
and I would appreciate suggestions for more to test.

Testing environment: a virtual machine with 2G of RAM and 2 CPUs,
without any swap to make the OOM behavior more deterministic.

1) 2 writers (each doing dd with 4M blocks to a 1G xfs partition,
   removing the files and starting over again) running in parallel for
   10s to build up a lot of dirty pages, then 100 parallel mem_eaters
   (anon private populated mmap which waits until it gets a signal) with
   80M each are started (a minimal mem_eater sketch follows the scenario
   1 results below).

   This of course causes an OOM flood and I have compared patched and
   unpatched kernels. The test is considered finished once no more OOM
   conditions are detected. This should tell us whether there are any
   excessive or premature kills:

* base kernel
$ grep "Killed process" base-oom-run.log | tail -n1
[  836.589319] Killed process 3035 (mem_eater) total-vm:85852kB, anon-rss:81996kB, file-rss:344kB
$ grep "invoked oom-killer" base-oom-run.log | wc -l
78
$ grep "DMA32.*all_unreclaimable? no" base-oom-run.log | wc -l
0

* patched kernel
$ grep "Killed process" patched-oom-run.log | tail -n1
[  843.281009] Killed process 2998 (mem_eater) total-vm:85852kB, anon-rss:82000kB, file-rss:4kB
$ grep "invoked oom-killer" patched-oom-run.log | wc -l
77
$ grep "DMA32.*all_unreclaimable? no" patched-oom-run.log | wc -l
0

So both kernels finished in a comparable time and killed a very similar
number of processes, and there doesn't seem to be any case where the
patched kernel would have considered the DMA32 zone reclaimable.
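
For illustration, a minimal mem_eater sketch along the lines described
in scenario 1 (the actual test program was not posted with this series,
so the size handling and the signal choice are assumptions):

/* mem_eater sketch: populate anonymous private memory and sit on it
 * until a signal arrives. Size in MB comes from argv[1] (80 by default,
 * matching the 80M instances used above). */
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/mman.h>

int main(int argc, char **argv)
{
	size_t mb = argc > 1 ? strtoul(argv[1], NULL, 10) : 80;
	size_t len = mb << 20;
	void *p;

	/* MAP_POPULATE pre-faults the mapping so the memory is consumed now */
	p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		 MAP_PRIVATE | MAP_ANONYMOUS | MAP_POPULATE, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	pause();	/* block until a signal (e.g. SIGTERM) arrives */
	return 0;
}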

2) 2 writers again with a 10s run and then 10 mem_eaters to consume as
   much memory as possible without triggering the OOM killer. This
   required a lot of tuning, but I've considered 3 consecutive runs
   without an OOM as a success.

* base kernel
size=$(awk '/MemFree/{printf "%dK", ($2/10)-(14*1024)}' /proc/meminfo)

* patched kernel
size=$(awk '/MemFree/{printf "%dK", ($2/10)-(7500)}' /proc/meminfo)

So it seems that the patched kernel handled the low-memory conditions
better and fired the OOM killer later.

3) Costly high-order allocations with a limited amount of memory.
   Start 10 mem_eaters in parallel, each with
   size=$(awk '/MemTotal/{printf "%d\n", $2/10}' /proc/meminfo)
   This will trigger the OOM killer, which will kill one of them and
   free up 200M. Then try to use all the remaining space for hugetlb
   pages and see how many of them succeed; kill everything, wait 2s and
   try again. This tests whether we do not fail __GFP_REPEAT costly
   allocations too early now.
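
A rough sketch of one such hugetlb round follows. The assumption (based
on the log format below) is that the test writes the requested count to
/proc/sys/vm/nr_hugepages and reads back how many pages the kernel
actually managed to allocate; it needs root and the exact procedure
used for the logs is an assumption:

/* Sketch only: request `count` hugetlb pages and report how many the
 * kernel actually allocated. */
#include <stdio.h>

static long set_nr_hugepages(long count)
{
	FILE *f;
	long got = -1;

	f = fopen("/proc/sys/vm/nr_hugepages", "w");
	if (!f)
		return -1;
	fprintf(f, "%ld\n", count);	/* ask for `count` huge pages */
	fclose(f);

	f = fopen("/proc/sys/vm/nr_hugepages", "r");
	if (!f)
		return -1;
	if (fscanf(f, "%ld", &got) != 1)	/* how many did we get? */
		got = -1;
	fclose(f);
	return got;
}

int main(void)
{
	long want = 74;		/* e.g. "Trying to allocate 74" below */

	printf("Trying to allocate %ld\n", want);
	printf("%ld\n", set_nr_hugepages(want));
	return 0;
}
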
* base kernel
$ sort base-hugepages.log | uniq -c
      1 66
     19 67
     20 Trying to allocate 74

* patched kernel
$ sort patched-hugepages.log | uniq -c
      1 66
     19 67
     20 Trying to allocate 74

This also doesn't look very bad but this particular test is quite timing
sensitive.

The above results do seem optimistic, but more workloads obviously
should be tested. I would really appreciate feedback on the approach I
have chosen before I go into more tuning. Is this a viable way to go?

[1] http://lkml.kernel.org/r/CA+55aFwapaED7JV6zm-NVkP-jKie+eQ1vDXWrKD=SkbshZSgmw@mail.gmail.com
[2] http://lkml.kernel.org/r/CA+55aFxwg=vS2nrXsQhAUzPQDGb8aQpZi0M7UUh21ftBo-z46Q@mail.gmail.com

^ permalink raw reply	[flat|nested] 20+ messages in thread

* [RFC 1/3] mm, oom: refactor oom detection
  2015-10-29 15:17 RFC: OOM detection rework v1 mhocko
@ 2015-10-29 15:17 ` mhocko
  2015-10-30  4:10   ` Hillf Danton
                     ` (2 more replies)
  2015-10-29 15:17 ` [RFC 2/3] mm: throttle on IO only when there are too many dirty and writeback pages mhocko
                   ` (2 subsequent siblings)
  3 siblings, 3 replies; 20+ messages in thread
From: mhocko @ 2015-10-29 15:17 UTC (permalink / raw)
  To: linux-mm
  Cc: Andrew Morton, Linus Torvalds, Mel Gorman, Johannes Weiner,
	Rik van Riel, David Rientjes, Tetsuo Handa, LKML, Michal Hocko

From: Michal Hocko <mhocko@suse.com>

__alloc_pages_slowpath has traditionally relied on direct reclaim and
did_some_progress as an indicator that it makes sense to retry the
allocation rather than declaring OOM. shrink_zones had to rely on
zone_reclaimable if shrink_zone didn't make any progress, to prevent a
premature OOM killer invocation - the LRU might be full of dirty or
writeback pages which direct reclaim cannot clean up.

zone_reclaimable allows rescanning the reclaimable lists several times
and restarting if a page is freed. This is really subtle behavior and it
might lead to a livelock when a single freed page keeps the allocator
looping while the current task will never be able to allocate that
single page. An OOM kill would be more appropriate than looping without
any progress for an unbounded amount of time.

This patch changes the OOM detection logic and pulls it out of
shrink_zone, which is too low a level to be appropriate for any
high-level decision such as OOM, which is a per-zonelist property. It is
__alloc_pages_slowpath which knows how many attempts have been made and
what progress has been achieved so far, so it is the more appropriate
place to implement this logic.

The new heuristic tries to be more deterministic and easier to follow.
Retrying makes sense only if the currently reclaimable memory + free
pages would allow the current allocation request to succeed (as per
__zone_watermark_ok) at least for one zone in the usable zonelist.

This alone wouldn't be sufficient, though, because the writeback might
get stuck and reclaimable pages might be pinned for a really long time
or even depend on the current allocation context. Therefore a feedback
mechanism is implemented which reduces the reclaim target after each
reclaim round without any progress. This means that we should eventually
converge to only NR_FREE_PAGES as the target, fail the wmark check and
proceed to OOM. The backoff is simple and linear, with 1/16 of the
reclaimable pages for each round without any progress. We are optimistic
and reset the counter after successful reclaim rounds.
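
To illustrate, a minimal userspace sketch of the backoff arithmetic
(the zone counters and the watermark are made-up numbers; this only
mirrors the formula used in the patch, it is not kernel code):

/* Each round without progress shaves another 1/MAX_STALL_BACKOFF of the
 * reclaimable pages off the target until, eventually, only the free
 * pages remain and the watermark check fails. */
#include <stdio.h>

#define MAX_STALL_BACKOFF 16

int main(void)
{
	unsigned long reclaimable = 1000;	/* made-up zone counters */
	unsigned long free = 50;
	unsigned long watermark = 200;
	int stall_backoff;

	for (stall_backoff = 0; stall_backoff <= MAX_STALL_BACKOFF; stall_backoff++) {
		unsigned long target = reclaimable;

		/* same formula as the patch: linear backoff with a +1 bias
		 * so that the target is fully consumed at MAX_STALL_BACKOFF;
		 * the subtraction may transiently underflow but adding free
		 * compensates as long as free covers the deficit (see the
		 * discussion downthread) */
		target -= stall_backoff * (1 + target / MAX_STALL_BACKOFF);
		target += free;

		/* stand-in for __zone_watermark_ok(..., target) */
		printf("backoff %2d: target %4lu -> %s\n", stall_backoff,
		       target, target > watermark ? "retry" : "give up (OOM)");
	}
	return 0;
}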

Costly high-order allocations mostly preserve their semantics: those
without __GFP_REPEAT fail right away while those which have the flag set
will back off once the amount of reclaimed pages reaches the equivalent
of the requested order. The only difference is that if there was no
progress during the reclaim we rely on the zone watermark check. This is
a more logical thing to do than the previous 1<<order attempts, which
were a result of zone_reclaimable faking the progress.

Signed-off-by: Michal Hocko <mhocko@suse.com>
---
 include/linux/swap.h |  1 +
 mm/page_alloc.c      | 69 ++++++++++++++++++++++++++++++++++++++++++++++------
 mm/vmscan.c          | 10 +-------
 3 files changed, 64 insertions(+), 16 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 9c7c4b418498..8298e1dc20f9 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -317,6 +317,7 @@ extern void lru_cache_add_active_or_unevictable(struct page *page,
 						struct vm_area_struct *vma);
 
 /* linux/mm/vmscan.c */
+extern unsigned long zone_reclaimable_pages(struct zone *zone);
 extern unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
 					gfp_t gfp_mask, nodemask_t *mask);
 extern int __isolate_lru_page(struct page *page, isolate_mode_t mode);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c73913648357..9c0abb75ad53 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2972,6 +2972,13 @@ static inline bool is_thp_gfp_mask(gfp_t gfp_mask)
 	return (gfp_mask & (GFP_TRANSHUGE | __GFP_KSWAPD_RECLAIM)) == GFP_TRANSHUGE;
 }
 
+/*
+ * Number of backoff steps for potentially reclaimable pages if the direct reclaim
+ * cannot make any progress. Each step will reduce 1/MAX_STALL_BACKOFF of the
+ * reclaimable memory.
+ */
+#define MAX_STALL_BACKOFF 16
+
 static inline struct page *
 __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 						struct alloc_context *ac)
@@ -2984,6 +2991,9 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	enum migrate_mode migration_mode = MIGRATE_ASYNC;
 	bool deferred_compaction = false;
 	int contended_compaction = COMPACT_CONTENDED_NONE;
+	struct zone *zone;
+	struct zoneref *z;
+	int stall_backoff = 0;
 
 	/*
 	 * In the slowpath, we sanity check order to avoid ever trying to
@@ -3135,13 +3145,56 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	if (gfp_mask & __GFP_NORETRY)
 		goto noretry;
 
-	/* Keep reclaiming pages as long as there is reasonable progress */
+	/*
+	 * Do not retry high order allocations unless they are __GFP_REPEAT
+	 * and even then do not retry endlessly.
+	 */
 	pages_reclaimed += did_some_progress;
-	if ((did_some_progress && order <= PAGE_ALLOC_COSTLY_ORDER) ||
-	    ((gfp_mask & __GFP_REPEAT) && pages_reclaimed < (1 << order))) {
-		/* Wait for some write requests to complete then retry */
-		wait_iff_congested(ac->preferred_zone, BLK_RW_ASYNC, HZ/50);
-		goto retry;
+	if (order > PAGE_ALLOC_COSTLY_ORDER) {
+		if (!(gfp_mask & __GFP_REPEAT) || pages_reclaimed >= (1<<order))
+			goto noretry;
+
+		if (did_some_progress)
+			goto retry;
+	}
+
+	/*
+	 * Be optimistic and consider all pages on reclaimable LRUs as usable
+	 * but make sure we converge to OOM if we cannot make any progress after
+	 * multiple consecutive failed attempts.
+	 */
+	if (did_some_progress)
+		stall_backoff = 0;
+	else
+		stall_backoff = min(stall_backoff+1, MAX_STALL_BACKOFF);
+
+	/*
+	 * Keep reclaiming pages while there is a chance this will lead somewhere.
+	 * If none of the target zones can satisfy our allocation request even
+	 * if all reclaimable pages are considered then we are screwed and have
+	 * to go OOM.
+	 */
+	for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx, ac->nodemask) {
+		unsigned long free = zone_page_state(zone, NR_FREE_PAGES);
+		unsigned long reclaimable;
+		unsigned long target;
+
+		reclaimable = zone_reclaimable_pages(zone) +
+			      zone_page_state(zone, NR_ISOLATED_FILE) +
+			      zone_page_state(zone, NR_ISOLATED_ANON);
+		target = reclaimable;
+		target -= stall_backoff * (1 + target/MAX_STALL_BACKOFF);
+		target += free;
+
+		/*
+		 * Would the allocation succeed if we reclaimed the whole target?
+		 */
+		if (__zone_watermark_ok(zone, order, min_wmark_pages(zone),
+				ac->high_zoneidx, alloc_flags, target)) {
+			/* Wait for some write requests to complete then retry */
+			wait_iff_congested(zone, BLK_RW_ASYNC, HZ/50);
+			goto retry;
+		}
 	}
 
 	/* Reclaim has failed us, start killing things */
@@ -3150,8 +3203,10 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 		goto got_pg;
 
 	/* Retry as long as the OOM killer is making progress */
-	if (did_some_progress)
+	if (did_some_progress) {
+		stall_backoff = 0;
 		goto retry;
+	}
 
 noretry:
 	/*
diff --git a/mm/vmscan.c b/mm/vmscan.c
index c88d74ad9304..bc14217acd47 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -193,7 +193,7 @@ static bool sane_reclaim(struct scan_control *sc)
 }
 #endif
 
-static unsigned long zone_reclaimable_pages(struct zone *zone)
+unsigned long zone_reclaimable_pages(struct zone *zone)
 {
 	unsigned long nr;
 
@@ -2639,10 +2639,6 @@ static bool shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
 
 		if (shrink_zone(zone, sc, zone_idx(zone) == classzone_idx))
 			reclaimable = true;
-
-		if (global_reclaim(sc) &&
-		    !reclaimable && zone_reclaimable(zone))
-			reclaimable = true;
 	}
 
 	/*
@@ -2734,10 +2730,6 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
 		goto retry;
 	}
 
-	/* Any of the zones still reclaimable?  Don't OOM. */
-	if (zones_reclaimable)
-		return 1;
-
 	return 0;
 }
 
-- 
2.6.1


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [RFC 2/3] mm: throttle on IO only when there are too many dirty and writeback pages
  2015-10-29 15:17 RFC: OOM detection rework v1 mhocko
  2015-10-29 15:17 ` [RFC 1/3] mm, oom: refactor oom detection mhocko
@ 2015-10-29 15:17 ` mhocko
  2015-10-30  4:18   ` Hillf Danton
  2015-10-30  5:48   ` Kamezawa Hiroyuki
  2015-10-29 15:17 ` [RFC 3/3] mm: use watermark checks for __GFP_REPEAT high order allocations mhocko
  2015-11-12 12:44 ` RFC: OOM detection rework v1 Michal Hocko
  3 siblings, 2 replies; 20+ messages in thread
From: mhocko @ 2015-10-29 15:17 UTC (permalink / raw)
  To: linux-mm
  Cc: Andrew Morton, Linus Torvalds, Mel Gorman, Johannes Weiner,
	Rik van Riel, David Rientjes, Tetsuo Handa, LKML, Michal Hocko

From: Michal Hocko <mhocko@suse.com>

wait_iff_congested has been used to throttle the allocator before it
retried another round of direct reclaim, to allow the writeback to make
some progress and to prevent the reclaim from looping over
dirty/writeback pages without making any progress. We used to do
congestion_wait before 0e093d99763e ("writeback: do not sleep on the
congestion queue if there are no congested BDIs or if significant
congestion is not being encountered in the current zone") but that led
to undesirable stalls and sleeping for the full timeout even when the
BDI wasn't congested. Hence wait_iff_congested was used instead. But it
seems that even wait_iff_congested doesn't work as expected. We might
have a small file LRU list with all pages dirty/writeback while the bdi
is not congested, so this ends up being just a cond_resched and can
trigger a premature OOM.

This patch replaces the unconditional wait_iff_congested by
congestion_wait, which is executed only if we _know_ that the last round
of direct reclaim didn't make any progress and dirty+writeback pages
make up more than half of the reclaimable pages on the zone which might
be usable for our target allocation. This shouldn't reintroduce the
stalls fixed by 0e093d99763e because congestion_wait is called only when
we are getting hopeless and sleeping is a better choice than OOMing with
many pages under IO.
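
To make the decision explicit, a tiny userspace sketch of the branch
this patch introduces (the counters are made-up numbers; the real code
reads the per-zone vmstat counters as in the hunk below):

/* If reclaim made no progress and at least half of the reclaimable
 * pages are dirty or under writeback, sleeping for IO is more useful
 * than another immediate reclaim round. */
#include <stdio.h>

static const char *throttle_action(int did_some_progress,
				   unsigned long dirty,
				   unsigned long writeback,
				   unsigned long reclaimable)
{
	if (did_some_progress)
		return "retry immediately";
	if (2 * (writeback + dirty) > reclaimable)
		return "congestion_wait(BLK_RW_ASYNC, HZ/10), then retry";
	return "cond_resched(), then retry";
}

int main(void)
{
	/* mostly clean LRU: no reason to sleep for IO */
	printf("%s\n", throttle_action(0, 10, 10, 1000));
	/* LRU dominated by dirty/writeback pages: wait for IO */
	printf("%s\n", throttle_action(0, 300, 400, 1000));
	return 0;
}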

Signed-off-by: Michal Hocko <mhocko@suse.com>
---
 mm/page_alloc.c | 19 +++++++++++++++++--
 1 file changed, 17 insertions(+), 2 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 9c0abb75ad53..0518ca6a9776 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3191,8 +3191,23 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 		 */
 		if (__zone_watermark_ok(zone, order, min_wmark_pages(zone),
 				ac->high_zoneidx, alloc_flags, target)) {
-			/* Wait for some write requests to complete then retry */
-			wait_iff_congested(zone, BLK_RW_ASYNC, HZ/50);
+			unsigned long writeback = zone_page_state(zone, NR_WRITEBACK),
+				      dirty = zone_page_state(zone, NR_FILE_DIRTY);
+
+			if (did_some_progress)
+				goto retry;
+
+			/*
+			 * If we didn't make any progress and have a lot of
+			 * dirty + writeback pages then we should wait for
+			 * an IO to complete to slow down the reclaim and
+			 * prevent from pre mature OOM
+			 */
+			if (2*(writeback + dirty) > reclaimable)
+				congestion_wait(BLK_RW_ASYNC, HZ/10);
+			else
+				cond_resched();
+
 			goto retry;
 		}
 	}
-- 
2.6.1


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [RFC 3/3] mm: use watermark checks for __GFP_REPEAT high order allocations
  2015-10-29 15:17 RFC: OOM detection rework v1 mhocko
  2015-10-29 15:17 ` [RFC 1/3] mm, oom: refactor oom detection mhocko
  2015-10-29 15:17 ` [RFC 2/3] mm: throttle on IO only when there are too many dirty and writeback pages mhocko
@ 2015-10-29 15:17 ` mhocko
  2015-11-12 12:44 ` RFC: OOM detection rework v1 Michal Hocko
  3 siblings, 0 replies; 20+ messages in thread
From: mhocko @ 2015-10-29 15:17 UTC (permalink / raw)
  To: linux-mm
  Cc: Andrew Morton, Linus Torvalds, Mel Gorman, Johannes Weiner,
	Rik van Riel, David Rientjes, Tetsuo Handa, LKML, Michal Hocko

From: Michal Hocko <mhocko@suse.com>

__alloc_pages_slowpath retries costly allocations until at least
1<<order worth of pages has been reclaimed or, if the reclaim hasn't
made any progress, until the watermark check fails for every zone even
with all reclaimable pages counted in.

The first condition was added by a41f24ea9fd6 ("page allocator: smarter
retry of costly-order allocations") and it assumed that lumpy reclaim
could have created a page of the sufficient order. Lumpy reclaim has
been removed quite some time ago, so the assumption doesn't hold
anymore. It would be more appropriate to check the compaction progress
instead, but this patch simply removes the check and relies solely on
the watermark check.

To prevent too many retries, the stall_backoff is not reset after a
reclaim which made progress, because we cannot assume it helped the
high-order situation.

Signed-off-by: Michal Hocko <mhocko@suse.com>
---
 mm/page_alloc.c | 21 +++++++--------------
 1 file changed, 7 insertions(+), 14 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0518ca6a9776..0dc1ca9b1219 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2986,7 +2986,6 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	bool can_direct_reclaim = gfp_mask & __GFP_DIRECT_RECLAIM;
 	struct page *page = NULL;
 	int alloc_flags;
-	unsigned long pages_reclaimed = 0;
 	unsigned long did_some_progress;
 	enum migrate_mode migration_mode = MIGRATE_ASYNC;
 	bool deferred_compaction = false;
@@ -3145,25 +3144,19 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	if (gfp_mask & __GFP_NORETRY)
 		goto noretry;
 
-	/*
-	 * Do not retry high order allocations unless they are __GFP_REPEAT
-	 * and even then do not retry endlessly.
-	 */
-	pages_reclaimed += did_some_progress;
-	if (order > PAGE_ALLOC_COSTLY_ORDER) {
-		if (!(gfp_mask & __GFP_REPEAT) || pages_reclaimed >= (1<<order))
-			goto noretry;
-
-		if (did_some_progress)
-			goto retry;
-	}
+	/* Do not retry high order allocations unless they are __GFP_REPEAT */
+	if (order > PAGE_ALLOC_COSTLY_ORDER && !(gfp_mask & __GFP_REPEAT))
+		goto noretry;
 
 	/*
 	 * Be optimistic and consider all pages on reclaimable LRUs as usable
 	 * but make sure we converge to OOM if we cannot make any progress after
 	 * multiple consecutive failed attempts.
+	 * Costly __GFP_REPEAT allocations might have made a progress but this
+	 * doesn't mean their order will become available due to high fragmentation
+	 * so do not reset the backoff for them
 	 */
-	if (did_some_progress)
+	if (did_some_progress && order <= PAGE_ALLOC_COSTLY_ORDER)
 		stall_backoff = 0;
 	else
 		stall_backoff = min(stall_backoff+1, MAX_STALL_BACKOFF);
-- 
2.6.1


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* Re: [RFC 1/3] mm, oom: refactor oom detection
  2015-10-29 15:17 ` [RFC 1/3] mm, oom: refactor oom detection mhocko
@ 2015-10-30  4:10   ` Hillf Danton
  2015-10-30  8:36     ` Michal Hocko
  2015-10-30  5:23   ` Kamezawa Hiroyuki
  2015-11-12 12:39   ` Michal Hocko
  2 siblings, 1 reply; 20+ messages in thread
From: Hillf Danton @ 2015-10-30  4:10 UTC (permalink / raw)
  To: mhocko, linux-mm
  Cc: 'Andrew Morton', 'Linus Torvalds',
	'Mel Gorman', 'Johannes Weiner',
	'Rik van Riel', 'David Rientjes',
	'Tetsuo Handa', 'LKML', 'Michal Hocko'

> +/*
> + * Number of backoff steps for potentially reclaimable pages if the direct reclaim
> + * cannot make any progress. Each step will reduce 1/MAX_STALL_BACKOFF of the
> + * reclaimable memory.
> + */
> +#define MAX_STALL_BACKOFF 16
> +
>  static inline struct page *
>  __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  						struct alloc_context *ac)
> @@ -2984,6 +2991,9 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  	enum migrate_mode migration_mode = MIGRATE_ASYNC;
>  	bool deferred_compaction = false;
>  	int contended_compaction = COMPACT_CONTENDED_NONE;
> +	struct zone *zone;
> +	struct zoneref *z;
> +	int stall_backoff = 0;
> 
>  	/*
>  	 * In the slowpath, we sanity check order to avoid ever trying to
> @@ -3135,13 +3145,56 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  	if (gfp_mask & __GFP_NORETRY)
>  		goto noretry;
> 
> -	/* Keep reclaiming pages as long as there is reasonable progress */
> +	/*
> +	 * Do not retry high order allocations unless they are __GFP_REPEAT
> +	 * and even then do not retry endlessly.
> +	 */
>  	pages_reclaimed += did_some_progress;
> -	if ((did_some_progress && order <= PAGE_ALLOC_COSTLY_ORDER) ||
> -	    ((gfp_mask & __GFP_REPEAT) && pages_reclaimed < (1 << order))) {
> -		/* Wait for some write requests to complete then retry */
> -		wait_iff_congested(ac->preferred_zone, BLK_RW_ASYNC, HZ/50);
> -		goto retry;
> +	if (order > PAGE_ALLOC_COSTLY_ORDER) {
> +		if (!(gfp_mask & __GFP_REPEAT) || pages_reclaimed >= (1<<order))
> +			goto noretry;
> +
> +		if (did_some_progress)
> +			goto retry;
> +	}
> +
> +	/*
> +	 * Be optimistic and consider all pages on reclaimable LRUs as usable
> +	 * but make sure we converge to OOM if we cannot make any progress after
> +	 * multiple consecutive failed attempts.
> +	 */
> +	if (did_some_progress)
> +		stall_backoff = 0;
> +	else
> +		stall_backoff = min(stall_backoff+1, MAX_STALL_BACKOFF);
> +
> +	/*
> +	 * Keep reclaiming pages while there is a chance this will lead somewhere.
> +	 * If none of the target zones can satisfy our allocation request even
> +	 * if all reclaimable pages are considered then we are screwed and have
> +	 * to go OOM.
> +	 */
> +	for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx, ac->nodemask) {
> +		unsigned long free = zone_page_state(zone, NR_FREE_PAGES);
> +		unsigned long reclaimable;
> +		unsigned long target;
> +
> +		reclaimable = zone_reclaimable_pages(zone) +
> +			      zone_page_state(zone, NR_ISOLATED_FILE) +
> +			      zone_page_state(zone, NR_ISOLATED_ANON);
> +		target = reclaimable;
> +		target -= stall_backoff * (1 + target/MAX_STALL_BACKOFF);

		target = reclaimable - stall_backoff * (1 + target/MAX_STALL_BACKOFF);
		       = reclaimable - stall_backoff - stall_backoff * (target/MAX_STALL_BACKOFF);

then the first stall_backoff looks unreasonable.
I guess you mean
		target	= reclaimable - target * (stall_backoff/MAX_STALL_BACKOFF);
			= reclaimable - stall_backoff * (target/MAX_STALL_BACKOFF);

> +		target += free;
> +
> +		/*
> +		 * Would the allocation succeed if we reclaimed the whole target?
> +		 */
> +		if (__zone_watermark_ok(zone, order, min_wmark_pages(zone),
> +				ac->high_zoneidx, alloc_flags, target)) {
> +			/* Wait for some write requests to complete then retry */
> +			wait_iff_congested(zone, BLK_RW_ASYNC, HZ/50);
> +			goto retry;
> +		}
>  	}
> 
[...]
/*
> @@ -2734,10 +2730,6 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
>  		goto retry;
>  	}
> 
> -	/* Any of the zones still reclaimable?  Don't OOM. */
> -	if (zones_reclaimable)
> -		return 1;
> -

Looks like the cleanup of the now-unused zones_reclaimable was left out.
>  	return 0;
>  }
> 
> --
> 2.6.1


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [RFC 2/3] mm: throttle on IO only when there are too many dirty and writeback pages
  2015-10-29 15:17 ` [RFC 2/3] mm: throttle on IO only when there are too many dirty and writeback pages mhocko
@ 2015-10-30  4:18   ` Hillf Danton
  2015-10-30  8:37     ` Michal Hocko
  2015-10-30  5:48   ` Kamezawa Hiroyuki
  1 sibling, 1 reply; 20+ messages in thread
From: Hillf Danton @ 2015-10-30  4:18 UTC (permalink / raw)
  To: mhocko, linux-mm
  Cc: 'Andrew Morton', 'Linus Torvalds',
	'Mel Gorman', 'Johannes Weiner',
	'Rik van Riel', 'David Rientjes',
	'Tetsuo Handa', 'LKML', 'Michal Hocko',
	Christoph Lameter


> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -3191,8 +3191,23 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  		 */
>  		if (__zone_watermark_ok(zone, order, min_wmark_pages(zone),
>  				ac->high_zoneidx, alloc_flags, target)) {
> -			/* Wait for some write requests to complete then retry */
> -			wait_iff_congested(zone, BLK_RW_ASYNC, HZ/50);
> +			unsigned long writeback = zone_page_state(zone, NR_WRITEBACK),
> +				      dirty = zone_page_state(zone, NR_FILE_DIRTY);
> +
> +			if (did_some_progress)
> +				goto retry;
> +
> +			/*
> +			 * If we didn't make any progress and have a lot of
> +			 * dirty + writeback pages then we should wait for
> +			 * an IO to complete to slow down the reclaim and
> +			 * prevent from pre mature OOM
> +			 */
> +			if (2*(writeback + dirty) > reclaimable)
> +				congestion_wait(BLK_RW_ASYNC, HZ/10);
> +			else
> +				cond_resched();
> +

Looks like the vmstat updater issue is not addressed.
>  			goto retry;
>  		}
>  	}
> --
> 2.6.1


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [RFC 1/3] mm, oom: refactor oom detection
  2015-10-29 15:17 ` [RFC 1/3] mm, oom: refactor oom detection mhocko
  2015-10-30  4:10   ` Hillf Danton
@ 2015-10-30  5:23   ` Kamezawa Hiroyuki
  2015-10-30  8:23     ` Michal Hocko
  2015-11-12 12:39   ` Michal Hocko
  2 siblings, 1 reply; 20+ messages in thread
From: Kamezawa Hiroyuki @ 2015-10-30  5:23 UTC (permalink / raw)
  To: mhocko, linux-mm
  Cc: Andrew Morton, Linus Torvalds, Mel Gorman, Johannes Weiner,
	Rik van Riel, David Rientjes, Tetsuo Handa, LKML, Michal Hocko

On 2015/10/30 0:17, mhocko@kernel.org wrote:
> From: Michal Hocko <mhocko@suse.com>
> 
> __alloc_pages_slowpath has traditionally relied on the direct reclaim
> and did_some_progress as an indicator that it makes sense to retry
> allocation rather than declaring OOM. shrink_zones had to rely on
> zone_reclaimable if shrink_zone didn't make any progress to prevent
> from pre mature OOM killer invocation - the LRU might be full of dirty
> or writeback pages and direct reclaim cannot clean those up.
> 
> zone_reclaimable will allow to rescan the reclaimable lists several
> times and restart if a page is freed. This is really subtle behavior
> and it might lead to a livelock when a single freed page keeps allocator
> looping but the current task will not be able to allocate that single
> page. OOM killer would be more appropriate than looping without any
> progress for unbounded amount of time.
> 
> This patch changes OOM detection logic and pulls it out from shrink_zone
> which is too low to be appropriate for any high level decisions such as OOM
> which is per zonelist property. It is __alloc_pages_slowpath which knows
> how many attempts have been done and what was the progress so far
> therefore it is more appropriate to implement this logic.
> 
> The new heuristic tries to be more deterministic and easier to follow.
> Retrying makes sense only if the currently reclaimable memory + free
> pages would allow the current allocation request to succeed (as per
> __zone_watermark_ok) at least for one zone in the usable zonelist.
> 
> This alone wouldn't be sufficient, though, because the writeback might
> get stuck and reclaimable pages might be pinned for a really long time
> or even depend on the current allocation context. Therefore there is a
> feedback mechanism implemented which reduces the reclaim target after
> each reclaim round without any progress. This means that we should
> eventually converge to only NR_FREE_PAGES as the target and fail on the
> wmark check and proceed to OOM. The backoff is simple and linear with
> 1/16 of the reclaimable pages for each round without any progress. We
> are optimistic and reset counter for successful reclaim rounds.
> 
> Costly high order pages mostly preserve their semantic and those without
> __GFP_REPEAT fail right away while those which have the flag set will
> back off after the amount of reclaimable pages reaches equivalent of the
> requested order. The only difference is that if there was no progress
> during the reclaim we rely on zone watermark check. This is more logical
> thing to do than previous 1<<order attempts which were a result of
> zone_reclaimable faking the progress.
> 
> Signed-off-by: Michal Hocko <mhocko@suse.com>
> ---
>   include/linux/swap.h |  1 +
>   mm/page_alloc.c      | 69 ++++++++++++++++++++++++++++++++++++++++++++++------
>   mm/vmscan.c          | 10 +-------
>   3 files changed, 64 insertions(+), 16 deletions(-)
> 
> diff --git a/include/linux/swap.h b/include/linux/swap.h
> index 9c7c4b418498..8298e1dc20f9 100644
> --- a/include/linux/swap.h
> +++ b/include/linux/swap.h
> @@ -317,6 +317,7 @@ extern void lru_cache_add_active_or_unevictable(struct page *page,
>   						struct vm_area_struct *vma);
>   
>   /* linux/mm/vmscan.c */
> +extern unsigned long zone_reclaimable_pages(struct zone *zone);
>   extern unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
>   					gfp_t gfp_mask, nodemask_t *mask);
>   extern int __isolate_lru_page(struct page *page, isolate_mode_t mode);
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index c73913648357..9c0abb75ad53 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2972,6 +2972,13 @@ static inline bool is_thp_gfp_mask(gfp_t gfp_mask)
>   	return (gfp_mask & (GFP_TRANSHUGE | __GFP_KSWAPD_RECLAIM)) == GFP_TRANSHUGE;
>   }
>   
> +/*
> + * Number of backoff steps for potentially reclaimable pages if the direct reclaim
> + * cannot make any progress. Each step will reduce 1/MAX_STALL_BACKOFF of the
> + * reclaimable memory.
> + */
> +#define MAX_STALL_BACKOFF 16
> +
>   static inline struct page *
>   __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>   						struct alloc_context *ac)
> @@ -2984,6 +2991,9 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>   	enum migrate_mode migration_mode = MIGRATE_ASYNC;
>   	bool deferred_compaction = false;
>   	int contended_compaction = COMPACT_CONTENDED_NONE;
> +	struct zone *zone;
> +	struct zoneref *z;
> +	int stall_backoff = 0;
>   
>   	/*
>   	 * In the slowpath, we sanity check order to avoid ever trying to
> @@ -3135,13 +3145,56 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>   	if (gfp_mask & __GFP_NORETRY)
>   		goto noretry;
>   
> -	/* Keep reclaiming pages as long as there is reasonable progress */
> +	/*
> +	 * Do not retry high order allocations unless they are __GFP_REPEAT
> +	 * and even then do not retry endlessly.
> +	 */
>   	pages_reclaimed += did_some_progress;
> -	if ((did_some_progress && order <= PAGE_ALLOC_COSTLY_ORDER) ||
> -	    ((gfp_mask & __GFP_REPEAT) && pages_reclaimed < (1 << order))) {
> -		/* Wait for some write requests to complete then retry */
> -		wait_iff_congested(ac->preferred_zone, BLK_RW_ASYNC, HZ/50);
> -		goto retry;
> +	if (order > PAGE_ALLOC_COSTLY_ORDER) {
> +		if (!(gfp_mask & __GFP_REPEAT) || pages_reclaimed >= (1<<order))
> +			goto noretry;
> +
> +		if (did_some_progress)
> +			goto retry;

why directly retry here ?


> +	}
> +
> +	/*
> +	 * Be optimistic and consider all pages on reclaimable LRUs as usable
> +	 * but make sure we converge to OOM if we cannot make any progress after
> +	 * multiple consecutive failed attempts.
> +	 */
> +	if (did_some_progress)
> +		stall_backoff = 0;
> +	else
> +		stall_backoff = min(stall_backoff+1, MAX_STALL_BACKOFF);
> +
> +	/*
> +	 * Keep reclaiming pages while there is a chance this will lead somewhere.
> +	 * If none of the target zones can satisfy our allocation request even
> +	 * if all reclaimable pages are considered then we are screwed and have
> +	 * to go OOM.
> +	 */
> +	for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx, ac->nodemask) {
> +		unsigned long free = zone_page_state(zone, NR_FREE_PAGES);
> +		unsigned long reclaimable;
> +		unsigned long target;
> +
> +		reclaimable = zone_reclaimable_pages(zone) +
> +			      zone_page_state(zone, NR_ISOLATED_FILE) +
> +			      zone_page_state(zone, NR_ISOLATED_ANON);
> +		target = reclaimable;
> +		target -= stall_backoff * (1 + target/MAX_STALL_BACKOFF);
> +		target += free;
> +
> +		/*
> +		 * Would the allocation succeed if we reclaimed the whole target?
> +		 */
> +		if (__zone_watermark_ok(zone, order, min_wmark_pages(zone),
> +				ac->high_zoneidx, alloc_flags, target)) {
> +			/* Wait for some write requests to complete then retry */
> +			wait_iff_congested(zone, BLK_RW_ASYNC, HZ/50);
> +			goto retry;
> +		}
>   	}
>   
>   	/* Reclaim has failed us, start killing things */
> @@ -3150,8 +3203,10 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>   		goto got_pg;
>   
>   	/* Retry as long as the OOM killer is making progress */
> -	if (did_some_progress)
> +	if (did_some_progress) {
> +		stall_backoff = 0;
>   		goto retry;
> +	}

Umm? I'm sorry, I didn't notice that page allocation may fail even if order < PAGE_ALLOC_COSTLY_ORDER.
I thought the old logic ignored did_some_progress. It seems like a big change.

So, now, a 0-order page allocation may fail in an OOM situation?

Thanks,
-Kame







^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [RFC 2/3] mm: throttle on IO only when there are too many dirty and writeback pages
  2015-10-29 15:17 ` [RFC 2/3] mm: throttle on IO only when there are too many dirty and writeback pages mhocko
  2015-10-30  4:18   ` Hillf Danton
@ 2015-10-30  5:48   ` Kamezawa Hiroyuki
  2015-10-30  8:38     ` Michal Hocko
  1 sibling, 1 reply; 20+ messages in thread
From: Kamezawa Hiroyuki @ 2015-10-30  5:48 UTC (permalink / raw)
  To: mhocko, linux-mm
  Cc: Andrew Morton, Linus Torvalds, Mel Gorman, Johannes Weiner,
	Rik van Riel, David Rientjes, Tetsuo Handa, LKML, Michal Hocko

On 2015/10/30 0:17, mhocko@kernel.org wrote:
> From: Michal Hocko <mhocko@suse.com>
> 
> wait_iff_congested has been used to throttle allocator before it retried
> another round of direct reclaim to allow the writeback to make some
> progress and prevent reclaim from looping over dirty/writeback pages
> without making any progress. We used to do congestion_wait before
> 0e093d99763e ("writeback: do not sleep on the congestion queue if
> there are no congested BDIs or if significant congestion is not being
> encountered in the current zone") but that led to undesirable stalls
> and sleeping for the full timeout even when the BDI wasn't congested.
> Hence wait_iff_congested was used instead. But it seems that even
> wait_iff_congested doesn't work as expected. We might have a small file
> LRU list with all pages dirty/writeback and yet the bdi is not congested
> so this is just a cond_resched in the end and can end up triggering pre
> mature OOM.
> 
> This patch replaces the unconditional wait_iff_congested by
> congestion_wait which is executed only if we _know_ that the last round
> of direct reclaim didn't make any progress and dirty+writeback pages are
> more than a half of the reclaimable pages on the zone which might be
> usable for our target allocation. This shouldn't reintroduce stalls
> fixed by 0e093d99763e because congestion_wait is called only when we
> are getting hopeless when sleeping is a better choice than OOM with many
> pages under IO.
> 
> Signed-off-by: Michal Hocko <mhocko@suse.com>
> ---
>   mm/page_alloc.c | 19 +++++++++++++++++--
>   1 file changed, 17 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 9c0abb75ad53..0518ca6a9776 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -3191,8 +3191,23 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>   		 */
>   		if (__zone_watermark_ok(zone, order, min_wmark_pages(zone),
>   				ac->high_zoneidx, alloc_flags, target)) {
> -			/* Wait for some write requests to complete then retry */
> -			wait_iff_congested(zone, BLK_RW_ASYNC, HZ/50);
> +			unsigned long writeback = zone_page_state(zone, NR_WRITEBACK),
> +				      dirty = zone_page_state(zone, NR_FILE_DIRTY);
> +
> +			if (did_some_progress)
> +				goto retry;
> +
> +			/*
> +			 * If we didn't make any progress and have a lot of
> +			 * dirty + writeback pages then we should wait for
> +			 * an IO to complete to slow down the reclaim and
> +			 * prevent from pre mature OOM
> +			 */
> +			if (2*(writeback + dirty) > reclaimable)

Doesn't this add unnecessary latency if other zones have enough clean memory ?


Thanks,
-Kame
 



^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [RFC 1/3] mm, oom: refactor oom detection
  2015-10-30  5:23   ` Kamezawa Hiroyuki
@ 2015-10-30  8:23     ` Michal Hocko
  2015-10-30  9:41       ` Kamezawa Hiroyuki
  0 siblings, 1 reply; 20+ messages in thread
From: Michal Hocko @ 2015-10-30  8:23 UTC (permalink / raw)
  To: Kamezawa Hiroyuki
  Cc: linux-mm, Andrew Morton, Linus Torvalds, Mel Gorman,
	Johannes Weiner, Rik van Riel, David Rientjes, Tetsuo Handa,
	LKML

On Fri 30-10-15 14:23:59, KAMEZAWA Hiroyuki wrote:
> On 2015/10/30 0:17, mhocko@kernel.org wrote:
[...]
> > @@ -3135,13 +3145,56 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> >   	if (gfp_mask & __GFP_NORETRY)
> >   		goto noretry;
> >   
> > -	/* Keep reclaiming pages as long as there is reasonable progress */
> > +	/*
> > +	 * Do not retry high order allocations unless they are __GFP_REPEAT
> > +	 * and even then do not retry endlessly.
> > +	 */
> >   	pages_reclaimed += did_some_progress;
> > -	if ((did_some_progress && order <= PAGE_ALLOC_COSTLY_ORDER) ||
> > -	    ((gfp_mask & __GFP_REPEAT) && pages_reclaimed < (1 << order))) {
> > -		/* Wait for some write requests to complete then retry */
> > -		wait_iff_congested(ac->preferred_zone, BLK_RW_ASYNC, HZ/50);
> > -		goto retry;
> > +	if (order > PAGE_ALLOC_COSTLY_ORDER) {
> > +		if (!(gfp_mask & __GFP_REPEAT) || pages_reclaimed >= (1<<order))
> > +			goto noretry;
> > +
> > +		if (did_some_progress)
> > +			goto retry;
> 
> why directly retry here ?

Because I wanted to preserve the previous logic for GFP_REPEAT as much
as possible here and do an incremental change in the later patch.

[...]

> > @@ -3150,8 +3203,10 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> >   		goto got_pg;
> >   
> >   	/* Retry as long as the OOM killer is making progress */
> > -	if (did_some_progress)
> > +	if (did_some_progress) {
> > +		stall_backoff = 0;
> >   		goto retry;
> > +	}
> 
> Umm ? I'm sorry that I didn't notice page allocation may fail even
> if order < PAGE_ALLOC_COSTLY_ORDER.  I thought old logic ignores
> did_some_progress. It seems a big change.

__alloc_pages_may_oom will set did_some_progress

> So, now, 0-order page allocation may fail in a OOM situation ?

No, they don't normally, and this patch doesn't change the logic here.
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [RFC 1/3] mm, oom: refactor oom detection
  2015-10-30  4:10   ` Hillf Danton
@ 2015-10-30  8:36     ` Michal Hocko
  2015-10-30 10:14       ` Michal Hocko
  0 siblings, 1 reply; 20+ messages in thread
From: Michal Hocko @ 2015-10-30  8:36 UTC (permalink / raw)
  To: Hillf Danton
  Cc: linux-mm, 'Andrew Morton', 'Linus Torvalds',
	'Mel Gorman', 'Johannes Weiner',
	'Rik van Riel', 'David Rientjes',
	'Tetsuo Handa', 'LKML'

On Fri 30-10-15 12:10:15, Hillf Danton wrote:
[...]
> > +	for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx, ac->nodemask) {
> > +		unsigned long free = zone_page_state(zone, NR_FREE_PAGES);
> > +		unsigned long reclaimable;
> > +		unsigned long target;
> > +
> > +		reclaimable = zone_reclaimable_pages(zone) +
> > +			      zone_page_state(zone, NR_ISOLATED_FILE) +
> > +			      zone_page_state(zone, NR_ISOLATED_ANON);
> > +		target = reclaimable;
> > +		target -= stall_backoff * (1 + target/MAX_STALL_BACKOFF);
> 
> 		target = reclaimable - stall_backoff * (1 + target/MAX_STALL_BACKOFF);
> 		             = reclaimable - stall_backoff - stall_backoff  * (target/MAX_STALL_BACKOFF);
> 
> then the first stall_backoff looks unreasonable.

The first stall_backoff is off by 1 but that shouldn't make any difference.

> I guess you mean
> 		target	= reclaimable - target * (stall_backoff/MAX_STALL_BACKOFF);
> 			= reclaimable - stall_back * (target/MAX_STALL_BACKOFF);

No, the reason I used the bias is to converge at MAX_STALL_BACKOFF. If
you have a target which is not divisible by MAX_STALL_BACKOFF then the
rounding would keep target > 0 and so we wouldn't converge. With the +1
you underflow by at most MAX_STALL_BACKOFF, which should be fixed up by
the free memory. Maybe a check for free < MAX_STALL_BACKOFF would be
good but I didn't get that far with this.
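
To make this concrete, a small userspace sketch comparing the variants
(the numbers are made up; the DIV_ROUND_UP form matches what is
suggested later in the thread; signed arithmetic is used only to make
the underflow visible):

#include <stdio.h>

#define MAX_STALL_BACKOFF 16
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

int main(void)
{
	long reclaimable = 1003;	/* deliberately not divisible by 16 */
	int sb = MAX_STALL_BACKOFF;	/* the final backoff step */

	/* no bias: truncating division can leave the target positive */
	long a = reclaimable - sb * (reclaimable / MAX_STALL_BACKOFF);
	/* +1 bias (the patch): goes below zero by at most MAX_STALL_BACKOFF */
	long b = reclaimable - sb * (1 + reclaimable / MAX_STALL_BACKOFF);
	/* ceiling division: hits zero exactly at the last step */
	long c = reclaimable - DIV_ROUND_UP(sb * reclaimable, MAX_STALL_BACKOFF);

	printf("no bias:      %ld\n", a);	/* 11 */
	printf("+1 bias:      %ld\n", b);	/* -5 */
	printf("DIV_ROUND_UP: %ld\n", c);	/* 0  */
	return 0;
}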

[...]

> /*
> > @@ -2734,10 +2730,6 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
> >  		goto retry;
> >  	}
> > 
> > -	/* Any of the zones still reclaimable?  Don't OOM. */
> > -	if (zones_reclaimable)
> > -		return 1;
> > -
> 
> Looks cleanup of zones_reclaimable left.

Removed. Thanks!

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [RFC 2/3] mm: throttle on IO only when there are too many dirty and writeback pages
  2015-10-30  4:18   ` Hillf Danton
@ 2015-10-30  8:37     ` Michal Hocko
  0 siblings, 0 replies; 20+ messages in thread
From: Michal Hocko @ 2015-10-30  8:37 UTC (permalink / raw)
  To: Hillf Danton
  Cc: linux-mm, 'Andrew Morton', 'Linus Torvalds',
	'Mel Gorman', 'Johannes Weiner',
	'Rik van Riel', 'David Rientjes',
	'Tetsuo Handa', 'LKML',
	Christoph Lameter

On Fri 30-10-15 12:18:50, Hillf Danton wrote:
> 
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -3191,8 +3191,23 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> >  		 */
> >  		if (__zone_watermark_ok(zone, order, min_wmark_pages(zone),
> >  				ac->high_zoneidx, alloc_flags, target)) {
> > -			/* Wait for some write requests to complete then retry */
> > -			wait_iff_congested(zone, BLK_RW_ASYNC, HZ/50);
> > +			unsigned long writeback = zone_page_state(zone, NR_WRITEBACK),
> > +				      dirty = zone_page_state(zone, NR_FILE_DIRTY);
> > +
> > +			if (did_some_progress)
> > +				goto retry;
> > +
> > +			/*
> > +			 * If we didn't make any progress and have a lot of
> > +			 * dirty + writeback pages then we should wait for
> > +			 * an IO to complete to slow down the reclaim and
> > +			 * prevent from pre mature OOM
> > +			 */
> > +			if (2*(writeback + dirty) > reclaimable)
> > +				congestion_wait(BLK_RW_ASYNC, HZ/10);
> > +			else
> > +				cond_resched();
> > +
> 
> Looks the vmstat updater issue is not addressed.

This is not the aim of this patch set.
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [RFC 2/3] mm: throttle on IO only when there are too many dirty and writeback pages
  2015-10-30  5:48   ` Kamezawa Hiroyuki
@ 2015-10-30  8:38     ` Michal Hocko
  0 siblings, 0 replies; 20+ messages in thread
From: Michal Hocko @ 2015-10-30  8:38 UTC (permalink / raw)
  To: Kamezawa Hiroyuki
  Cc: linux-mm, Andrew Morton, Linus Torvalds, Mel Gorman,
	Johannes Weiner, Rik van Riel, David Rientjes, Tetsuo Handa,
	LKML

On Fri 30-10-15 14:48:40, KAMEZAWA Hiroyuki wrote:
[...]
> > @@ -3191,8 +3191,23 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> >   		 */
> >   		if (__zone_watermark_ok(zone, order, min_wmark_pages(zone),
> >   				ac->high_zoneidx, alloc_flags, target)) {
> > -			/* Wait for some write requests to complete then retry */
> > -			wait_iff_congested(zone, BLK_RW_ASYNC, HZ/50);
> > +			unsigned long writeback = zone_page_state(zone, NR_WRITEBACK),
> > +				      dirty = zone_page_state(zone, NR_FILE_DIRTY);
> > +
> > +			if (did_some_progress)
> > +				goto retry;
> > +
> > +			/*
> > +			 * If we didn't make any progress and have a lot of
> > +			 * dirty + writeback pages then we should wait for
> > +			 * an IO to complete to slow down the reclaim and
> > +			 * prevent from pre mature OOM
> > +			 */
> > +			if (2*(writeback + dirty) > reclaimable)
> 
> Doesn't this add unnecessary latency if other zones have enough clean memory ?

We know we haven't made any progress in the last reclaim round, so any
zone with clean memory is rather unlikely.

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [RFC 1/3] mm, oom: refactor oom detection
  2015-10-30  8:23     ` Michal Hocko
@ 2015-10-30  9:41       ` Kamezawa Hiroyuki
  2015-10-30 10:18         ` Michal Hocko
  0 siblings, 1 reply; 20+ messages in thread
From: Kamezawa Hiroyuki @ 2015-10-30  9:41 UTC (permalink / raw)
  To: Michal Hocko
  Cc: linux-mm, Andrew Morton, Linus Torvalds, Mel Gorman,
	Johannes Weiner, Rik van Riel, David Rientjes, Tetsuo Handa,
	LKML

On 2015/10/30 17:23, Michal Hocko wrote:
> On Fri 30-10-15 14:23:59, KAMEZAWA Hiroyuki wrote:
>> On 2015/10/30 0:17, mhocko@kernel.org wrote:
> [...]
>>> @@ -3135,13 +3145,56 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>>>    	if (gfp_mask & __GFP_NORETRY)
>>>    		goto noretry;
>>>
>>> -	/* Keep reclaiming pages as long as there is reasonable progress */
>>> +	/*
>>> +	 * Do not retry high order allocations unless they are __GFP_REPEAT
>>> +	 * and even then do not retry endlessly.
>>> +	 */
>>>    	pages_reclaimed += did_some_progress;
>>> -	if ((did_some_progress && order <= PAGE_ALLOC_COSTLY_ORDER) ||
>>> -	    ((gfp_mask & __GFP_REPEAT) && pages_reclaimed < (1 << order))) {
>>> -		/* Wait for some write requests to complete then retry */
>>> -		wait_iff_congested(ac->preferred_zone, BLK_RW_ASYNC, HZ/50);
>>> -		goto retry;
>>> +	if (order > PAGE_ALLOC_COSTLY_ORDER) {
>>> +		if (!(gfp_mask & __GFP_REPEAT) || pages_reclaimed >= (1<<order))
>>> +			goto noretry;
>>> +
>>> +		if (did_some_progress)
>>> +			goto retry;
>>
>> why directly retry here ?
>
> Because I wanted to preserve the previous logic for GFP_REPEAT as much
> as possible here and do an incremental change in the later patch.
>

I see.

> [...]
>
>>> @@ -3150,8 +3203,10 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>>>    		goto got_pg;
>>>
>>>    	/* Retry as long as the OOM killer is making progress */
>>> -	if (did_some_progress)
>>> +	if (did_some_progress) {
>>> +		stall_backoff = 0;
>>>    		goto retry;
>>> +	}
>>
>> Umm ? I'm sorry that I didn't notice page allocation may fail even
>> if order < PAGE_ALLOC_COSTLY_ORDER.  I thought old logic ignores
>> did_some_progress. It seems a big change.
>
> __alloc_pages_may_oom will set did_some_progress
>
>> So, now, 0-order page allocation may fail in a OOM situation ?
>
> No they don't normally and this patch doesn't change the logic here.
>

I understand your patch doesn't change the behavior.
Looking into __alloc_pages_may_oom(), *did_some_progress is finally set by

      if (out_of_memory(&oc) || WARN_ON_ONCE(gfp_mask & __GFP_NOFAIL))
                 *did_some_progress = 1;

...so it depends on the out_of_memory() return value.
Now, an allocation may fail if the oom-killer is disabled... Isn't that complicated?

Shouldn't we have

  if (order < PAGE_ALLOC_COSTLY_ORDER)
     goto retry;

here ?

Thanks,
-Kame







^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [RFC 1/3] mm, oom: refactor oom detection
  2015-10-30  8:36     ` Michal Hocko
@ 2015-10-30 10:14       ` Michal Hocko
  2015-10-30 13:32         ` Tetsuo Handa
  2015-10-31  3:57         ` Hillf Danton
  0 siblings, 2 replies; 20+ messages in thread
From: Michal Hocko @ 2015-10-30 10:14 UTC (permalink / raw)
  To: Hillf Danton
  Cc: linux-mm, 'Andrew Morton', 'Linus Torvalds',
	'Mel Gorman', 'Johannes Weiner',
	'Rik van Riel', 'David Rientjes',
	'Tetsuo Handa', 'LKML'

On Fri 30-10-15 09:36:26, Michal Hocko wrote:
> On Fri 30-10-15 12:10:15, Hillf Danton wrote:
> [...]
> > > +	for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx, ac->nodemask) {
> > > +		unsigned long free = zone_page_state(zone, NR_FREE_PAGES);
> > > +		unsigned long reclaimable;
> > > +		unsigned long target;
> > > +
> > > +		reclaimable = zone_reclaimable_pages(zone) +
> > > +			      zone_page_state(zone, NR_ISOLATED_FILE) +
> > > +			      zone_page_state(zone, NR_ISOLATED_ANON);
> > > +		target = reclaimable;
> > > +		target -= stall_backoff * (1 + target/MAX_STALL_BACKOFF);
> > 
> > 		target = reclaimable - stall_backoff * (1 + target/MAX_STALL_BACKOFF);
> > 		             = reclaimable - stall_backoff - stall_backoff  * (target/MAX_STALL_BACKOFF);
> > 
> > then the first stall_backoff looks unreasonable.
> 
> First stall_backoff is off by 1 but that shouldn't make any difference.
> 
> > I guess you mean
> > 		target	= reclaimable - target * (stall_backoff/MAX_STALL_BACKOFF);
> > 			= reclaimable - stall_back * (target/MAX_STALL_BACKOFF);
> 
> No the reason I used the bias is to converge for MAX_STALL_BACKOFF. If
> you have target which is not divisible by MAX_STALL_BACKOFF then the
> rounding would get target > 0 and so we wouldn't converge. With the +1
> you underflow which is MAX_STALL_BACKOFF in maximum which should be
> fixed up by the free memory. Maybe a check for free < MAX_STALL_BACKOFF
> would be good but I didn't get that far with this.

I've ended up with the following after all. It uses a ceiling for the
division, which should be underflow safe albeit less readable (at least
for me).
---
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0dc1ca9b1219..c9a4e62f234e 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3176,7 +3176,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 			      zone_page_state(zone, NR_ISOLATED_FILE) +
 			      zone_page_state(zone, NR_ISOLATED_ANON);
 		target = reclaimable;
-		target -= stall_backoff * (1 + target/MAX_STALL_BACKOFF);
+		target -= (stall_backoff * target + MAX_STALL_BACKOFF - 1) / MAX_STALL_BACKOFF;
 		target += free;
 
 		/*
diff --git a/mm/vmscan.c b/mm/vmscan.c
index bc14217acd47..0b3ec972ec7a 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2672,7 +2672,6 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
 	int initial_priority = sc->priority;
 	unsigned long total_scanned = 0;
 	unsigned long writeback_threshold;
-	bool zones_reclaimable;
 retry:
 	delayacct_freepages_start();
 
@@ -2683,7 +2682,7 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
 		vmpressure_prio(sc->gfp_mask, sc->target_mem_cgroup,
 				sc->priority);
 		sc->nr_scanned = 0;
-		zones_reclaimable = shrink_zones(zonelist, sc);
+		shrink_zones(zonelist, sc);
 
 		total_scanned += sc->nr_scanned;
 		if (sc->nr_reclaimed >= sc->nr_to_reclaim)
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* Re: [RFC 1/3] mm, oom: refactor oom detection
  2015-10-30  9:41       ` Kamezawa Hiroyuki
@ 2015-10-30 10:18         ` Michal Hocko
  0 siblings, 0 replies; 20+ messages in thread
From: Michal Hocko @ 2015-10-30 10:18 UTC (permalink / raw)
  To: Kamezawa Hiroyuki
  Cc: linux-mm, Andrew Morton, Linus Torvalds, Mel Gorman,
	Johannes Weiner, Rik van Riel, David Rientjes, Tetsuo Handa,
	LKML

On Fri 30-10-15 18:41:30, KAMEZAWA Hiroyuki wrote:
[...]
> >>So, now, 0-order page allocation may fail in a OOM situation ?
> >
> >No they don't normally and this patch doesn't change the logic here.
> >
> 
> I understand your patch doesn't change the behavior.
> Looking into __alloc_pages_may_oom(), *did_some_progress is finally set by
> 
>      if (out_of_memory(&oc) || WARN_ON_ONCE(gfp_mask & __GFP_NOFAIL))
>                 *did_some_progress = 1;
> 
> ...depends on out_of_memory() return value.
> Now, allocation may fail if oom-killer is disabled.... Isn't it complicated ?

Yes, and there shouldn't be any allocations after the OOM killer has
been disabled. Userspace is already frozen and there shouldn't be any
other memory activity.
 
> Shouldn't we have
> 
>  if (order < PAGE_ALLOC_COSTLY_ORDER)
>     goto retry;
> 
> here ?

How could we move on during suspend if the reclaim doesn't proceed and
we cannot really kill anything to free up memory resources? We are
simply past the point where any userspace can be woken up. Anyway, this
is tangential to this particular patch series.

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [RFC 1/3] mm, oom: refactor oom detection
  2015-10-30 10:14       ` Michal Hocko
@ 2015-10-30 13:32         ` Tetsuo Handa
  2015-10-30 14:55           ` Michal Hocko
  2015-10-31  3:57         ` Hillf Danton
  1 sibling, 1 reply; 20+ messages in thread
From: Tetsuo Handa @ 2015-10-30 13:32 UTC (permalink / raw)
  To: mhocko
  Cc: hillf.zj, linux-mm, akpm, torvalds, mgorman, hannes, riel,
	rientjes, linux-kernel

Michal Hocko wrote:
> +		target -= (stall_backoff * target + MAX_STALL_BACKOFF - 1) / MAX_STALL_BACKOFF;
target -= DIV_ROUND_UP(stall_backoff * target, MAX_STALL_BACKOFF);



Michal Hocko wrote:
> This alone wouldn't be sufficient, though, because the writeback might
> get stuck and reclaimable pages might be pinned for a really long time
> or even depend on the current allocation context.

Is this the dependency which I worried about at
http://lkml.kernel.org/r/201510262044.BAI43236.FOMSFFOtOVLJQH@I-love.SAKURA.ne.jp ?

>                                                   Therefore there is a
> feedback mechanism implemented which reduces the reclaim target after
> each reclaim round without any progress.

If yes, this feedback mechanism will help avoid an infinite wait loop.

>                                          This means that we should
> eventually converge to only NR_FREE_PAGES as the target and fail on the
> wmark check and proceed to OOM.

What if all in-flight allocation requests are !__GFP_NOFAIL && !__GFP_FS ?
(In other words, either "no __GFP_FS allocations are in-flight" or "all
__GFP_FS allocations are in-flight but are either waiting for completion
of operations which depend on !__GFP_FS allocations with a lock held or
waiting for that lock to be released".)

Don't we need to call out_of_memory() even for !__GFP_FS allocations?

>                                 The backoff is simple and linear with
> 1/16 of the reclaimable pages for each round without any progress. We
> are optimistic and reset counter for successful reclaim rounds.

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [RFC 1/3] mm, oom: refactor oom detection
  2015-10-30 13:32         ` Tetsuo Handa
@ 2015-10-30 14:55           ` Michal Hocko
  0 siblings, 0 replies; 20+ messages in thread
From: Michal Hocko @ 2015-10-30 14:55 UTC (permalink / raw)
  To: Tetsuo Handa
  Cc: hillf.zj, linux-mm, akpm, torvalds, mgorman, hannes, riel,
	rientjes, linux-kernel

On Fri 30-10-15 22:32:27, Tetsuo Handa wrote:
> Michal Hocko wrote:
> > +		target -= (stall_backoff * target + MAX_STALL_BACKOFF - 1) / MAX_STALL_BACKOFF;
> target -= DIV_ROUND_UP(stall_backoff * target, MAX_STALL_BACKOFF);

Ohh, we have a macro for that. Good to know. Thanks. It sure looks much
easier to follow.
 
> Michal Hocko wrote:
> > This alone wouldn't be sufficient, though, because the writeback might
> > get stuck and reclaimable pages might be pinned for a really long time
> > or even depend on the current allocation context.
> 
> Is this a dependency which I worried at
> http://lkml.kernel.org/r/201510262044.BAI43236.FOMSFFOtOVLJQH@I-love.SAKURA.ne.jp ?

Yes, I had restricted allocation contexts in mind here.

> >                                                   Therefore there is a
> > feedback mechanism implemented which reduces the reclaim target after
> > each reclaim round without any progress.
> 
> If yes, this feedback mechanism will help avoiding infinite wait loop.
> 
> >                                          This means that we should
> > eventually converge to only NR_FREE_PAGES as the target and fail on the
> > wmark check and proceed to OOM.
> 
> What if all in-flight allocation requests are !__GFP_NOFAIL && !__GFP_FS ?

Then we will loop like crazy hoping that _something_ will reclaim memory
for us. Same as we do now.
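
A minimal user-space sketch of that behaviour (the flag bit and the
reclaim stub are made up; this is only the shape of the loop, not the
kernel's code):

#include <stdbool.h>
#include <stdio.h>

#define __GFP_FS	(1u << 0)	/* illustrative bit, not the real value */

static bool reclaim_made_progress(void)
{
	return false;			/* assume reclaim is stuck */
}

int main(void)
{
	unsigned int gfp_mask = 0;	/* a GFP_NOFS-like request */
	int attempts;

	for (attempts = 0; attempts < 5; attempts++) {	/* bounded only for the demo */
		if (reclaim_made_progress())
			break;			/* would return a page */
		if (gfp_mask & __GFP_FS) {
			printf("would consider the OOM killer\n");
			break;
		}
		/* !__GFP_FS: the OOM killer is skipped, so just retry */
	}
	printf("still looping after %d attempts, stopped only by the demo bound\n",
	       attempts);
	return 0;
}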

> (In other words, either "no __GFP_FS allocations are in-flight" or "all
> __GFP_FS allocations are in-flight but are either waiting for completion
> of operations which depend on !__GFP_FS allocations with a lock held or
> waiting for that lock to be released".)
> 
> Don't we need to call out_of_memory() even for !__GFP_FS allocations?

I do not think this is in the scope of this patch series. I am trying to
normalize the OOM detection, and GFP_FS is a separate beast; we do not
have enough counters to decide whether the OOM killer would be
premature or not (e.g. we do not know how much memory is unreclaimable
just because of a NOFS context). I am convinced that such allocations
simply have to fail, as I've suggested quite some time ago, and I plan
to revisit that soon(ish). I consider the two orthogonal.

Thanks!
-- 
Michal Hocko
SUSE Labs


* Re: [RFC 1/3] mm, oom: refactor oom detection
  2015-10-30 10:14       ` Michal Hocko
  2015-10-30 13:32         ` Tetsuo Handa
@ 2015-10-31  3:57         ` Hillf Danton
  1 sibling, 0 replies; 20+ messages in thread
From: Hillf Danton @ 2015-10-31  3:57 UTC (permalink / raw)
  To: 'Michal Hocko'
  Cc: linux-mm, 'Andrew Morton', 'Linus Torvalds',
	'Mel Gorman', 'Johannes Weiner',
	'Rik van Riel', 'David Rientjes',
	'Tetsuo Handa', 'LKML'

> On Fri 30-10-15 09:36:26, Michal Hocko wrote:
> > On Fri 30-10-15 12:10:15, Hillf Danton wrote:
> > [...]
> > > > +	for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx, ac->nodemask) {
> > > > +		unsigned long free = zone_page_state(zone, NR_FREE_PAGES);
> > > > +		unsigned long reclaimable;
> > > > +		unsigned long target;
> > > > +
> > > > +		reclaimable = zone_reclaimable_pages(zone) +
> > > > +			      zone_page_state(zone, NR_ISOLATED_FILE) +
> > > > +			      zone_page_state(zone, NR_ISOLATED_ANON);
> > > > +		target = reclaimable;
> > > > +		target -= stall_backoff * (1 + target/MAX_STALL_BACKOFF);
> > >
> > > 		target = reclaimable - stall_backoff * (1 + target/MAX_STALL_BACKOFF);
> > > 		             = reclaimable - stall_backoff - stall_backoff  * (target/MAX_STALL_BACKOFF);
> > >
> > > then the first stall_backoff looks unreasonable.
> >
> > First stall_backoff is off by 1 but that shouldn't make any difference.
> >
> > > I guess you mean
> > > 		target	= reclaimable - target * (stall_backoff/MAX_STALL_BACKOFF);
> > > 			= reclaimable - stall_backoff * (target/MAX_STALL_BACKOFF);
> >
> > No, the reason I used the bias is to converge for MAX_STALL_BACKOFF. If
> > you have a target which is not divisible by MAX_STALL_BACKOFF then the
> > rounding would leave target > 0 and so we wouldn't converge. With the +1
> > you can underflow, by at most MAX_STALL_BACKOFF, which should be fixed
> > up by the free memory. Maybe a check for free < MAX_STALL_BACKOFF
> > would be good but I didn't get that far with this.
> 
> I've ended up with the following after all. It uses a ceiling for the
> division; this should be underflow safe, albeit less readable (at least
> for me).

Looks good, thanks.

Acked-by: Hillf Danton <hillf.zj@alibaba-inc.com>

> ---
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 0dc1ca9b1219..c9a4e62f234e 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -3176,7 +3176,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  			      zone_page_state(zone, NR_ISOLATED_FILE) +
>  			      zone_page_state(zone, NR_ISOLATED_ANON);
>  		target = reclaimable;
> -		target -= stall_backoff * (1 + target/MAX_STALL_BACKOFF);
> +		target -= (stall_backoff * target + MAX_STALL_BACKOFF - 1) / MAX_STALL_BACKOFF;
>  		target += free;
> 
>  		/*
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index bc14217acd47..0b3ec972ec7a 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -2672,7 +2672,6 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
>  	int initial_priority = sc->priority;
>  	unsigned long total_scanned = 0;
>  	unsigned long writeback_threshold;
> -	bool zones_reclaimable;
>  retry:
>  	delayacct_freepages_start();
> 
> @@ -2683,7 +2682,7 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
>  		vmpressure_prio(sc->gfp_mask, sc->target_mem_cgroup,
>  				sc->priority);
>  		sc->nr_scanned = 0;
> -		zones_reclaimable = shrink_zones(zonelist, sc);
> +		shrink_zones(zonelist, sc);
> 
>  		total_scanned += sc->nr_scanned;
>  		if (sc->nr_reclaimed >= sc->nr_to_reclaim)
> --
> Michal Hocko
> SUSE Labs
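
To make the underflow point above concrete, a small user-space
comparison of the two expressions at stall_backoff == MAX_STALL_BACKOFF,
with a made-up target that is not divisible by MAX_STALL_BACKOFF:

#include <stdio.h>

#define MAX_STALL_BACKOFF 16

int main(void)
{
	unsigned long reclaimable = 1003;	/* 1003 % 16 != 0, made up */
	unsigned long stall_backoff = MAX_STALL_BACKOFF;
	unsigned long target;

	/* original form: subtracts 1008 from 1003, so the unsigned value
	 * wraps around (an underflow of 5 here, at most MAX_STALL_BACKOFF) */
	target = reclaimable;
	target -= stall_backoff * (1 + target / MAX_STALL_BACKOFF);
	printf("old form:     %lu\n", target);

	/* ceiling form: subtracts exactly the target itself at the final
	 * backoff step, so it converges to 0 without wrapping */
	target = reclaimable;
	target -= (stall_backoff * target + MAX_STALL_BACKOFF - 1) / MAX_STALL_BACKOFF;
	printf("ceiling form: %lu\n", target);

	return 0;
}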



* Re: [RFC 1/3] mm, oom: refactor oom detection
  2015-10-29 15:17 ` [RFC 1/3] mm, oom: refactor oom detection mhocko
  2015-10-30  4:10   ` Hillf Danton
  2015-10-30  5:23   ` Kamezawa Hiroyuki
@ 2015-11-12 12:39   ` Michal Hocko
  2 siblings, 0 replies; 20+ messages in thread
From: Michal Hocko @ 2015-11-12 12:39 UTC (permalink / raw)
  To: linux-mm
  Cc: Andrew Morton, Linus Torvalds, Mel Gorman, Johannes Weiner,
	Rik van Riel, David Rientjes, Tetsuo Handa, LKML

On Thu 29-10-15 16:17:13, mhocko@kernel.org wrote:
[...]
> @@ -3135,13 +3145,56 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  	if (gfp_mask & __GFP_NORETRY)
>  		goto noretry;
>  
> -	/* Keep reclaiming pages as long as there is reasonable progress */
> +	/*
> +	 * Do not retry high order allocations unless they are __GFP_REPEAT
> +	 * and even then do not retry endlessly.
> +	 */
>  	pages_reclaimed += did_some_progress;
> -	if ((did_some_progress && order <= PAGE_ALLOC_COSTLY_ORDER) ||
> -	    ((gfp_mask & __GFP_REPEAT) && pages_reclaimed < (1 << order))) {
> -		/* Wait for some write requests to complete then retry */
> -		wait_iff_congested(ac->preferred_zone, BLK_RW_ASYNC, HZ/50);
> -		goto retry;
> +	if (order > PAGE_ALLOC_COSTLY_ORDER) {
> +		if (!(gfp_mask & __GFP_REPEAT) || pages_reclaimed >= (1<<order))
> +			goto noretry;

This is not correct because we could fail a __GFP_NOFAIL allocation. It
should instead be:
		if (!(gfp_mask & __GFP_NOFAIL) &&
		   (!(gfp_mask & __GFP_REPEAT) || pages_reclaimed >= (1<<order)))
			goto noretry;

It looks rather ugly but it will get much better with patch 3, which
reorganizes the code a bit.

> +
> +		if (did_some_progress)
> +			goto retry;
> +	}
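
A quick user-space check of how the corrected condition treats the
interesting flag combinations for a costly allocation (the flag bits are
stand-ins and order 4 is simply one above PAGE_ALLOC_COSTLY_ORDER):

#include <stdbool.h>
#include <stdio.h>

#define __GFP_REPEAT	(1u << 0)	/* stand-in bits, not the real values */
#define __GFP_NOFAIL	(1u << 1)

/* true when the slowpath would stop retrying reclaim (goto noretry) */
static bool gives_up(unsigned int gfp_mask, unsigned long pages_reclaimed,
		     unsigned int order)
{
	return !(gfp_mask & __GFP_NOFAIL) &&
	       (!(gfp_mask & __GFP_REPEAT) || pages_reclaimed >= (1UL << order));
}

int main(void)
{
	printf("plain:              %d\n", gives_up(0, 0, 4));			/* 1: gives up */
	printf("__GFP_REPEAT early: %d\n", gives_up(__GFP_REPEAT, 8, 4));	/* 0: keeps retrying */
	printf("__GFP_REPEAT done:  %d\n", gives_up(__GFP_REPEAT, 16, 4));	/* 1: reclaimed 1 << order */
	printf("__GFP_NOFAIL:       %d\n", gives_up(__GFP_NOFAIL, 100, 4));	/* 0: must not fail */
	return 0;
}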

-- 
Michal Hocko
SUSE Labs


* Re: RFC: OOM detection rework v1
  2015-10-29 15:17 RFC: OOM detection rework v1 mhocko
                   ` (2 preceding siblings ...)
  2015-10-29 15:17 ` [RFC 3/3] mm: use watermark checks for __GFP_REPEAT high order allocations mhocko
@ 2015-11-12 12:44 ` Michal Hocko
  3 siblings, 0 replies; 20+ messages in thread
From: Michal Hocko @ 2015-11-12 12:44 UTC (permalink / raw)
  To: linux-mm
  Cc: Andrew Morton, Linus Torvalds, Mel Gorman, Johannes Weiner,
	Rik van Riel, David Rientjes, Tetsuo Handa, LKML

Just a heads up. I plan to repost this with changes reflecting the
feedback so far after the merge window closes. There were only a few
minor style fixes and one bug fix (GFP_NOFAIL vs. costly high order
allocations). I know people are busy with the merge window now and
I hope that the future post will be a better basis for further
discussion.
-- 
Michal Hocko
SUSE Labs


end of thread, other threads:[~2015-11-12 12:44 UTC | newest]

Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-10-29 15:17 RFC: OOM detection rework v1 mhocko
2015-10-29 15:17 ` [RFC 1/3] mm, oom: refactor oom detection mhocko
2015-10-30  4:10   ` Hillf Danton
2015-10-30  8:36     ` Michal Hocko
2015-10-30 10:14       ` Michal Hocko
2015-10-30 13:32         ` Tetsuo Handa
2015-10-30 14:55           ` Michal Hocko
2015-10-31  3:57         ` Hillf Danton
2015-10-30  5:23   ` Kamezawa Hiroyuki
2015-10-30  8:23     ` Michal Hocko
2015-10-30  9:41       ` Kamezawa Hiroyuki
2015-10-30 10:18         ` Michal Hocko
2015-11-12 12:39   ` Michal Hocko
2015-10-29 15:17 ` [RFC 2/3] mm: throttle on IO only when there are too many dirty and writeback pages mhocko
2015-10-30  4:18   ` Hillf Danton
2015-10-30  8:37     ` Michal Hocko
2015-10-30  5:48   ` Kamezawa Hiroyuki
2015-10-30  8:38     ` Michal Hocko
2015-10-29 15:17 ` [RFC 3/3] mm: use watermark checks for __GFP_REPEAT high order allocations mhocko
2015-11-12 12:44 ` RFC: OOM detection rework v1 Michal Hocko
