* [RFC 0/3] OOM detection rework v3
@ 2015-12-01 12:56 Michal Hocko
  2015-12-01 12:56 ` [RFC 1/3] mm, oom: refactor oom detection Michal Hocko
                   ` (3 more replies)
  0 siblings, 4 replies; 24+ messages in thread
From: Michal Hocko @ 2015-12-01 12:56 UTC (permalink / raw)
  To: linux-mm
  Cc: Andrew Morton, Linus Torvalds, Mel Gorman, Johannes Weiner,
	David Rientjes, Tetsuo Handa, Hillf Danton, KAMEZAWA Hiroyuki

Hi,

This is v3 of the RFC. The previous version was posted at [1].
* Changes since v2
- rebased on top of mmotm-2015-11-25-17-08 which includes
  wait_iff_congested related changes which needed refresh in
  patch#1 and patch#2
- use zone_page_state_snapshot for NR_FREE_PAGES per David
- shrink_zones doesn't need to return anything per David
- retested because the major kernel version has changed since
  the last time (4.2 -> 4.3 based kernel + mmotm patches)

* Changes since v1
- backoff calculation was de-obfuscated by using DIV_ROUND_UP
- __GFP_NOFAIL high order might fail fixed - theoretical bug

As pointed out by Linus [2][3], relying on zone_reclaimable as a way to
communicate reclaim progress is rather dubious. I tend to agree: not
only is it really obscure, it is also not hard to imagine cases where a
single page freed in the loop keeps all the reclaimers looping without
making any progress because their gfp_mask wouldn't allow them to get
that page anyway (e.g. a single GFP_ATOMIC alloc-and-free loop). This is
rare enough that it doesn't happen in practice, but the current logic is
obscure, hard to follow and also non-deterministic.

This is an attempt to make the OOM detection more deterministic and
easier to follow because each reclaimer basically tracks its own
progress, and that tracking is implemented at the page allocator layer
rather than spread out between the allocator and the reclaim path. More
details on the implementation are described in the first patch.
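To give a rough idea of the shape of the new logic, here is a toy
userspace model (illustrative only; should_retry, zone_state and the
simplified watermark comparison are stand-ins, not the names or the
exact checks used in the patches):

    #include <stdbool.h>

    #define MAX_STALL_BACKOFF 16

    struct zone_state {
            unsigned long free;          /* free pages snapshot */
            unsigned long reclaimable;   /* potentially reclaimable pages */
            unsigned long min_wmark;     /* min watermark for this request */
    };

    /*
     * Retry only while at least one zone could satisfy the request if the
     * (progressively discounted) reclaimable pages were actually reclaimed.
     */
    static bool should_retry(const struct zone_state *z, int nr, int stall_backoff)
    {
            for (int i = 0; i < nr; i++) {
                    unsigned long target = z[i].reclaimable;

                    /* discount 1/MAX_STALL_BACKOFF per round without progress */
                    target -= (stall_backoff * target + MAX_STALL_BACKOFF - 1)
                                    / MAX_STALL_BACKOFF;
                    if (z[i].free + target > z[i].min_wmark)
                            return true;
            }
            return false;   /* no hope left - go OOM */
    }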

I have tested several different scenarios, but it should be clear that
testing the OOM killer in a representative way is quite hard. There is
usually only a tiny gap between almost-OOM and full-blown OOM, and it is
often time sensitive. Anyway, I have tested the following 3 scenarios
and I would appreciate suggestions for more to test.

Testing environment: a virtual machine with 2G of RAM and 2 CPUs,
without any swap, to make the OOM behavior more deterministic.

1) 2 writers (each doing dd with 4M blocks to a 1G xfs partition,
   removing the files and starting over again) running in parallel for
   10s to build up a lot of dirty pages, and then 100 parallel
   mem_eaters (anon private populated mmap which waits until it gets a
   signal), 80M each.

   This causes an OOM flood of course, and I have compared patched and
   unpatched kernels. The test is considered finished once no further
   OOM conditions are detected. This should tell us whether there are
   any excessive or premature kills. A rough sketch of the workload
   driver is shown below.
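The driver for this scenario looked roughly like the following
(illustrative only - the mount point, block counts and the mem_eater
command line are assumptions, not a verbatim copy of my script):

    # two writers dirtying a 1G xfs partition
    for i in 1 2; do
            ( while true; do
                    dd if=/dev/zero of=/mnt/xfs/file.$i bs=4M count=100
                    rm /mnt/xfs/file.$i
              done ) &
    done
    sleep 10

    # 100 anon private populated mmap eaters, 80M each, waiting for a signal
    for i in $(seq 100); do
            ./mem_eater 80M &
    done
    wait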

I have performed two runs this time, each after a fresh boot.

* base kernel
$ grep "Killed process" base-oom-run1.log | tail -n1
[  211.824379] Killed process 3086 (mem_eater) total-vm:85852kB, anon-rss:81996kB, file-rss:332kB, shmem-rss:0kB
$ grep "Killed process" base-oom-run2.log | tail -n1
[  157.188326] Killed process 3094 (mem_eater) total-vm:85852kB, anon-rss:81996kB, file-rss:368kB, shmem-rss:0kB

$ grep "invoked oom-killer" base-oom-run1.log | wc -l
78
$ grep "invoked oom-killer" base-oom-run2.log | wc -l
76

The number of OOM invocations is consistent with my last measurements,
but the runtime is very different (it took 800+s the last time). One
thing that could have skewed the results was that I was tail -f'ing the
serial log on the host system to watch the progress. I have stopped
doing that. The results are more consistent now but still quite
different from the last time. This is really weird, so I've retested
with the last 4.2 mmotm again and I am getting a consistent ~220s, which
is really close to the above. If I apply the WQ vmstat patch on top I am
getting close to 160s, so the stale vmstat counters made a difference,
which is to be expected. I have a new SSD in my laptop which might have
made a difference, but I wouldn't expect it to be that large.

$ grep "DMA32.*all_unreclaimable? no" base-oom-run1.log | wc -l
4
$ grep "DMA32.*all_unreclaimable? no" base-oom-run2.log | wc -l
1

* patched kernel
$ grep "Killed process" patched-oom-run1.log | tail -n1
[  341.164930] Killed process 3099 (mem_eater) total-vm:85852kB, anon-rss:82000kB, file-rss:336kB, shmem-rss:0kB
$ grep "Killed process" patched-oom-run2.log | tail -n1
[  349.111539] Killed process 3082 (mem_eater) total-vm:85852kB, anon-rss:81996kB, file-rss:4kB, shmem-rss:0kB

$ grep "invoked oom-killer" patched-oom-run1.log | wc -l
78
$ grep "invoked oom-killer" patched-oom-run2.log | wc -l
77

$ grep "DMA32.*all_unreclaimable? no" patched-oom-run1.log | wc -l
1
$ grep "DMA32.*all_unreclaimable? no" patched-oom-run2.log | wc -l
0

So the number of OOM killer invocations is the same, but the overall
runtime of the test was much longer with the patched kernel. This can be
attributed to more retries in general. The results from the base kernel
are quite inconsistent and I think that consistency is better here.


2) 2 writers again, running for 10s, and then 10 mem_eaters to consume as
   much memory as possible without triggering the OOM killer. This required
   a lot of tuning, but I've considered 3 consecutive runs without OOM a
   success. The mem_eater size was computed as follows:

* base kernel
size=$(awk '/MemFree/{printf "%dK", ($2/10)-(15*1024)}' /proc/meminfo)

* patched kernel
size=$(awk '/MemFree/{printf "%dK", ($2/10)-(9*1024)}' /proc/meminfo)

It was -14M for the base 4.2 kernel and -7500K for the patched 4.2 kernel
in my last measurements. The patched kernel handled the low memory
conditions better and fired the OOM killer later. A rough sketch of how
the computed size was used is shown below.
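The mem_eaters were then started with the computed size along these
lines (a sketch; the mem_eater command line is an assumption):

    for i in $(seq 10); do
            ./mem_eater $size &     # size computed by the awk above
    done
    wait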

3) Costly high-order allocations with a limited amount of memory.
   Start 10 mem_eaters in parallel, each with
   size=$(awk '/MemTotal/{printf "%d\n", $2/10}' /proc/meminfo)
   This will invoke the OOM killer, which kills one of them and frees up
   ~200M; then try to use all the remaining space for hugetlb pages and
   see how many of them succeed. Kill everything, wait 2s and try again.
   This tests whether we do not fail __GFP_REPEAT costly allocations too
   early now. A sketch of the hugetlb part is shown below.
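The hugetlb part was driven by something like the following (a sketch;
the 2M hugepage size and the exact bookkeeping are assumptions):

    # back as much of the remaining free memory as possible with hugetlb
    # pages and log how many we asked for vs. how many we actually got
    nr=$(awk '/MemFree/{printf "%d", $2/2048}' /proc/meminfo)
    echo "Trying to allocate $nr"
    echo $nr > /proc/sys/vm/nr_hugepages
    awk '/HugePages_Total/{print $2}' /proc/meminfo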
* base kernel
$ sort base-hugepages.log | uniq -c
      1 64
     13 65
      6 66
     20 Trying to allocate 73

* patched kernel
$ sort patched-hugepages.log | uniq -c
     17 65
      3 66
     20 Trying to allocate 73

This doesn't look bad either, but this particular test is quite timing
sensitive.

The above results do seem optimistic, but more loads should obviously be
tested. I would really appreciate feedback on the approach I have chosen
before I go into more tuning. Is this a viable way to go?

[1] http://lkml.kernel.org/r/1447851840-15640-1-git-send-email-mhocko@kernel.org
[2] http://lkml.kernel.org/r/CA+55aFwapaED7JV6zm-NVkP-jKie+eQ1vDXWrKD=SkbshZSgmw@mail.gmail.com
[3] http://lkml.kernel.org/r/CA+55aFxwg=vS2nrXsQhAUzPQDGb8aQpZi0M7UUh21ftBo-z46Q@mail.gmail.com


^ permalink raw reply	[flat|nested] 24+ messages in thread

* [RFC 1/3] mm, oom: refactor oom detection
  2015-12-01 12:56 [RFC 0/3] OOM detection rework v3 Michal Hocko
@ 2015-12-01 12:56 ` Michal Hocko
  2015-12-11 16:16   ` Johannes Weiner
  2015-12-01 12:56 ` [RFC 2/3] mm: throttle on IO only when there are too many dirty and writeback pages Michal Hocko
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 24+ messages in thread
From: Michal Hocko @ 2015-12-01 12:56 UTC (permalink / raw)
  To: linux-mm
  Cc: Andrew Morton, Linus Torvalds, Mel Gorman, Johannes Weiner,
	David Rientjes, Tetsuo Handa, Hillf Danton, KAMEZAWA Hiroyuki,
	Michal Hocko

From: Michal Hocko <mhocko@suse.com>

__alloc_pages_slowpath has traditionally relied on direct reclaim and
did_some_progress as an indicator that it makes sense to retry the
allocation rather than declaring OOM. shrink_zones had to rely on
zone_reclaimable if shrink_zone didn't make any progress, to prevent a
premature OOM killer invocation - the LRU might be full of dirty or
writeback pages and direct reclaim cannot clean those up.

zone_reclaimable allows rescanning the reclaimable lists several times
and restarting if a page is freed. This is really subtle behavior and it
might lead to a livelock when a single freed page keeps the allocator
looping but the current task will not be able to allocate that single
page. The OOM killer would be more appropriate than looping without any
progress for an unbounded amount of time.

This patch changes the OOM detection logic and pulls it out of
shrink_zone, which is at too low a level to be appropriate for any high
level decision such as OOM, which is a per-zonelist property. It is
__alloc_pages_slowpath which knows how many attempts have been done and
what the progress was so far, therefore it is more appropriate to
implement this logic there.

The new heuristic tries to be more deterministic and easier to follow.
It builds on an assumption that retrying makes sense only if the
currently reclaimable memory + free pages would allow the current
allocation request to succeed (as per __zone_watermark_ok) at least for
one zone in the usable zonelist.

This alone wouldn't be sufficient, though, because the writeback might
get stuck and reclaimable pages might be pinned for a really long time
or even depend on the current allocation context. Therefore there is a
feedback mechanism implemented which reduces the reclaim target after
each reclaim round without any progress. This means that we should
eventually converge to only NR_FREE_PAGES as the target and fail on the
wmark check and proceed to OOM. The backoff is simple and linear:
1/16 of the reclaimable pages for each round without any progress. We
are optimistic and reset the counter after successful reclaim rounds.
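As an example (numbers are arbitrary): with MAX_STALL_BACKOFF = 16, a
zone with 160000 reclaimable pages and 3 consecutive rounds without any
progress, the watermark check is done against

    target = 160000 - DIV_ROUND_UP(3 * 160000, 16) + NR_FREE_PAGES
           = 130000 + NR_FREE_PAGES

and once stall_backoff reaches MAX_STALL_BACKOFF the target degenerates
to the free pages alone.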

Costly high order allocations mostly preserve their semantics: those
without __GFP_REPEAT fail right away, while those which have the flag
set will back off once the amount of reclaimed pages reaches the
equivalent of the requested order. The only difference is that if there
was no progress during the reclaim we rely on the zone watermark check.
This is a more logical thing to do than the previous 1<<order attempts,
which were a result of zone_reclaimable faking the progress.

[rientjes@google.com: use zone_page_state_snapshot for NR_FREE_PAGES]
[rientjes@google.com: shrink_zones doesn't need to return anything]
Acked-by: Hillf Danton <hillf.zj@alibaba-inc.com>
Signed-off-by: Michal Hocko <mhocko@suse.com>
---
 include/linux/swap.h |  1 +
 mm/page_alloc.c      | 66 ++++++++++++++++++++++++++++++++++++++++++++++------
 mm/vmscan.c          | 25 ++++----------------
 3 files changed, 64 insertions(+), 28 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 457181844b6e..738ae2206635 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -316,6 +316,7 @@ extern void lru_cache_add_active_or_unevictable(struct page *page,
 						struct vm_area_struct *vma);
 
 /* linux/mm/vmscan.c */
+extern unsigned long zone_reclaimable_pages(struct zone *zone);
 extern unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
 					gfp_t gfp_mask, nodemask_t *mask);
 extern int __isolate_lru_page(struct page *page, isolate_mode_t mode);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index e267faad4649..af221067de6a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2984,6 +2984,13 @@ static inline bool is_thp_gfp_mask(gfp_t gfp_mask)
 	return (gfp_mask & (GFP_TRANSHUGE | __GFP_KSWAPD_RECLAIM)) == GFP_TRANSHUGE;
 }
 
+/*
+ * Number of backoff steps for potentially reclaimable pages if the direct reclaim
+ * cannot make any progress. Each step will reduce 1/MAX_STALL_BACKOFF of the
+ * reclaimable memory.
+ */
+#define MAX_STALL_BACKOFF 16
+
 static inline struct page *
 __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 						struct alloc_context *ac)
@@ -2996,6 +3003,9 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	enum migrate_mode migration_mode = MIGRATE_ASYNC;
 	bool deferred_compaction = false;
 	int contended_compaction = COMPACT_CONTENDED_NONE;
+	struct zone *zone;
+	struct zoneref *z;
+	int stall_backoff = 0;
 
 	/*
 	 * In the slowpath, we sanity check order to avoid ever trying to
@@ -3155,13 +3165,53 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	if (gfp_mask & __GFP_NORETRY)
 		goto noretry;
 
-	/* Keep reclaiming pages as long as there is reasonable progress */
+	/*
+	 * Do not retry high order allocations unless they are __GFP_REPEAT
+	 * and even then do not retry endlessly unless explicitly told so
+	 */
 	pages_reclaimed += did_some_progress;
-	if ((did_some_progress && order <= PAGE_ALLOC_COSTLY_ORDER) ||
-	    ((gfp_mask & __GFP_REPEAT) && pages_reclaimed < (1 << order))) {
-		/* Wait for some write requests to complete then retry */
-		wait_iff_congested(ac->preferred_zone, BLK_RW_ASYNC, HZ/50);
-		goto retry;
+	if (order > PAGE_ALLOC_COSTLY_ORDER) {
+		if (!(gfp_mask & __GFP_NOFAIL) &&
+		   (!(gfp_mask & __GFP_REPEAT) || pages_reclaimed >= (1<<order)))
+			goto noretry;
+
+		if (did_some_progress)
+			goto retry;
+	}
+
+	/*
+	 * Be optimistic and consider all pages on reclaimable LRUs as usable
+	 * but make sure we converge to OOM if we cannot make any progress after
+	 * multiple consecutive failed attempts.
+	 */
+	if (did_some_progress)
+		stall_backoff = 0;
+	else
+		stall_backoff = min(stall_backoff+1, MAX_STALL_BACKOFF);
+
+	/*
+	 * Keep reclaiming pages while there is a chance this will lead somewhere.
+	 * If none of the target zones can satisfy our allocation request even
+	 * if all reclaimable pages are considered then we are screwed and have
+	 * to go OOM.
+	 */
+	for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx, ac->nodemask) {
+		unsigned long free = zone_page_state_snapshot(zone, NR_FREE_PAGES);
+		unsigned long target;
+
+		target = zone_reclaimable_pages(zone);
+		target -= DIV_ROUND_UP(stall_backoff * target, MAX_STALL_BACKOFF);
+		target += free;
+
+		/*
+		 * Would the allocation succeed if we reclaimed the whole target?
+		 */
+		if (__zone_watermark_ok(zone, order, min_wmark_pages(zone),
+				ac->high_zoneidx, alloc_flags, target)) {
+			/* Wait for some write requests to complete then retry */
+			wait_iff_congested(zone, BLK_RW_ASYNC, HZ/50);
+			goto retry;
+		}
 	}
 
 	/* Reclaim has failed us, start killing things */
@@ -3170,8 +3220,10 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 		goto got_pg;
 
 	/* Retry as long as the OOM killer is making progress */
-	if (did_some_progress)
+	if (did_some_progress) {
+		stall_backoff = 0;
 		goto retry;
+	}
 
 noretry:
 	/*
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 4589cfdbe405..489212252cd6 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -192,7 +192,7 @@ static bool sane_reclaim(struct scan_control *sc)
 }
 #endif
 
-static unsigned long zone_reclaimable_pages(struct zone *zone)
+unsigned long zone_reclaimable_pages(struct zone *zone)
 {
 	unsigned long nr;
 
@@ -2516,10 +2516,8 @@ static inline bool compaction_ready(struct zone *zone, int order)
  *
  * If a zone is deemed to be full of pinned pages then just give it a light
  * scan then give up on it.
- *
- * Returns true if a zone was reclaimable.
  */
-static bool shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
+static void shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
 {
 	struct zoneref *z;
 	struct zone *zone;
@@ -2527,7 +2525,6 @@ static bool shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
 	unsigned long nr_soft_scanned;
 	gfp_t orig_mask;
 	enum zone_type requested_highidx = gfp_zone(sc->gfp_mask);
-	bool reclaimable = false;
 
 	/*
 	 * If the number of buffer_heads in the machine exceeds the maximum
@@ -2592,17 +2589,10 @@ static bool shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
 						&nr_soft_scanned);
 			sc->nr_reclaimed += nr_soft_reclaimed;
 			sc->nr_scanned += nr_soft_scanned;
-			if (nr_soft_reclaimed)
-				reclaimable = true;
 			/* need some check for avoid more shrink_zone() */
 		}
 
-		if (shrink_zone(zone, sc, zone_idx(zone) == classzone_idx))
-			reclaimable = true;
-
-		if (global_reclaim(sc) &&
-		    !reclaimable && zone_reclaimable(zone))
-			reclaimable = true;
+		shrink_zone(zone, sc, zone_idx(zone));
 	}
 
 	/*
@@ -2610,8 +2600,6 @@ static bool shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
 	 * promoted it to __GFP_HIGHMEM.
 	 */
 	sc->gfp_mask = orig_mask;
-
-	return reclaimable;
 }
 
 /*
@@ -2636,7 +2624,6 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
 	int initial_priority = sc->priority;
 	unsigned long total_scanned = 0;
 	unsigned long writeback_threshold;
-	bool zones_reclaimable;
 retry:
 	delayacct_freepages_start();
 
@@ -2647,7 +2634,7 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
 		vmpressure_prio(sc->gfp_mask, sc->target_mem_cgroup,
 				sc->priority);
 		sc->nr_scanned = 0;
-		zones_reclaimable = shrink_zones(zonelist, sc);
+		shrink_zones(zonelist, sc);
 
 		total_scanned += sc->nr_scanned;
 		if (sc->nr_reclaimed >= sc->nr_to_reclaim)
@@ -2694,10 +2681,6 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
 		goto retry;
 	}
 
-	/* Any of the zones still reclaimable?  Don't OOM. */
-	if (zones_reclaimable)
-		return 1;
-
 	return 0;
 }
 
-- 
2.6.2


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [RFC 2/3] mm: throttle on IO only when there are too many dirty and writeback pages
  2015-12-01 12:56 [RFC 0/3] OOM detection rework v3 Michal Hocko
  2015-12-01 12:56 ` [RFC 1/3] mm, oom: refactor oom detection Michal Hocko
@ 2015-12-01 12:56 ` Michal Hocko
  2015-12-02  7:09   ` Hillf Danton
  2015-12-11 16:25   ` Johannes Weiner
  2015-12-01 12:56 ` [RFC 3/3] mm: use watermark checks for __GFP_REPEAT high order allocations Michal Hocko
  2015-12-11  8:42 ` [RFC 0/3] OOM detection rework v3 Michal Hocko
  3 siblings, 2 replies; 24+ messages in thread
From: Michal Hocko @ 2015-12-01 12:56 UTC (permalink / raw)
  To: linux-mm
  Cc: Andrew Morton, Linus Torvalds, Mel Gorman, Johannes Weiner,
	David Rientjes, Tetsuo Handa, Hillf Danton, KAMEZAWA Hiroyuki,
	Michal Hocko

From: Michal Hocko <mhocko@suse.com>

wait_iff_congested has been used to throttle the allocator before it
retried another round of direct reclaim, to allow the writeback to make
some progress and prevent reclaim from looping over dirty/writeback
pages without making any progress. We used to do congestion_wait before
0e093d99763e ("writeback: do not sleep on the congestion queue if
there are no congested BDIs or if significant congestion is not being
encountered in the current zone") but that led to undesirable stalls
and sleeping for the full timeout even when the BDI wasn't congested.
Hence wait_iff_congested was used instead. But it seems that even
wait_iff_congested doesn't work as expected. We might have a small file
LRU list with all pages dirty/writeback and yet the bdi is not
congested, so this ends up being just a cond_resched and can trigger a
premature OOM.

This patch replaces the unconditional wait_iff_congested by
congestion_wait, which is executed only if we _know_ that the last round
of direct reclaim didn't make any progress and dirty+writeback pages
make up more than half of the reclaimable pages in the zone which might
be usable for our target allocation. This shouldn't reintroduce the
stalls fixed by 0e093d99763e because congestion_wait is called only when
we are getting hopeless and sleeping is a better choice than OOMing with
many pages under IO.
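In code, the new throttling point boils down to the following
(simplified from the hunk below):

    /* no progress and more than half of the reclaimable pages under IO */
    if (!did_some_progress && 2 * (writeback + dirty) > reclaimable)
            congestion_wait(BLK_RW_ASYNC, HZ/10);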

We have to preserve the logic introduced by "mm, vmstat: allow WQ
concurrency to discover memory reclaim doesn't make any progress" in
__alloc_pages_slowpath now that wait_iff_congested is not used there
anymore. As the only remaining user of wait_iff_congested is
shrink_inactive_list, we can remove the WQ specific short sleep from
wait_iff_congested because the sleep needs to be done only once per
allocation retry cycle.

Signed-off-by: Michal Hocko <mhocko@suse.com>
---
 mm/backing-dev.c | 19 +++----------------
 mm/page_alloc.c  | 33 ++++++++++++++++++++++++++++++---
 2 files changed, 33 insertions(+), 19 deletions(-)

diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index 7340353f8aea..d2473ce9cc57 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -957,9 +957,8 @@ EXPORT_SYMBOL(congestion_wait);
  * jiffies for either a BDI to exit congestion of the given @sync queue
  * or a write to complete.
  *
- * In the absence of zone congestion, a short sleep or a cond_resched is
- * performed to yield the processor and to allow other subsystems to make
- * a forward progress.
+ * In the absence of zone congestion, cond_resched() is called to yield
+ * the processor if necessary but otherwise does not sleep.
  *
  * The return value is 0 if the sleep is for the full timeout. Otherwise,
  * it is the number of jiffies that were still remaining when the function
@@ -980,19 +979,7 @@ long wait_iff_congested(struct zone *zone, int sync, long timeout)
 	if (atomic_read(&nr_wb_congested[sync]) == 0 ||
 	    !test_bit(ZONE_CONGESTED, &zone->flags)) {
 
-		/*
-		 * Memory allocation/reclaim might be called from a WQ
-		 * context and the current implementation of the WQ
-		 * concurrency control doesn't recognize that a particular
-		 * WQ is congested if the worker thread is looping without
-		 * ever sleeping. Therefore we have to do a short sleep
-		 * here rather than calling cond_resched().
-		 */
-		if (current->flags & PF_WQ_WORKER)
-			schedule_timeout(1);
-		else
-			cond_resched();
-
+		cond_resched();
 		/* In case we scheduled, work out time remaining */
 		ret = timeout - (jiffies - start);
 		if (ret < 0)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index af221067de6a..168a675e9116 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3198,8 +3198,9 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx, ac->nodemask) {
 		unsigned long free = zone_page_state_snapshot(zone, NR_FREE_PAGES);
 		unsigned long target;
+		unsigned long reclaimable;
 
-		target = zone_reclaimable_pages(zone);
+		reclaimable = target = zone_reclaimable_pages(zone);
 		target -= DIV_ROUND_UP(stall_backoff * target, MAX_STALL_BACKOFF);
 		target += free;
 
@@ -3208,8 +3209,34 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 		 */
 		if (__zone_watermark_ok(zone, order, min_wmark_pages(zone),
 				ac->high_zoneidx, alloc_flags, target)) {
-			/* Wait for some write requests to complete then retry */
-			wait_iff_congested(zone, BLK_RW_ASYNC, HZ/50);
+			unsigned long writeback = zone_page_state_snapshot(zone, NR_WRITEBACK),
+				      dirty = zone_page_state_snapshot(zone, NR_FILE_DIRTY);
+
+			/*
+			 * If we didn't make any progress and have a lot of
+			 * dirty + writeback pages then we should wait for
+			 * an IO to complete to slow down the reclaim and
+			 * prevent from pre mature OOM
+			 */
+			if (!did_some_progress && 2*(writeback + dirty) > reclaimable) {
+				congestion_wait(BLK_RW_ASYNC, HZ/10);
+				goto retry;
+			}
+
+			/*
+			 * Memory allocation/reclaim might be called from a WQ
+			 * context and the current implementation of the WQ
+			 * concurrency control doesn't recognize that
+			 * a particular WQ is congested if the worker thread is
+			 * looping without ever sleeping. Therefore we have to
+			 * do a short sleep here rather than calling
+			 * cond_resched().
+			 */
+			if (current->flags & PF_WQ_WORKER)
+				schedule_timeout(1);
+			else
+				cond_resched();
+
 			goto retry;
 		}
 	}
-- 
2.6.2


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [RFC 3/3] mm: use watermark checks for __GFP_REPEAT high order allocations
  2015-12-01 12:56 [RFC 0/3] OOM detection rework v3 Michal Hocko
  2015-12-01 12:56 ` [RFC 1/3] mm, oom: refactor oom detection Michal Hocko
  2015-12-01 12:56 ` [RFC 2/3] mm: throttle on IO only when there are too many dirty and writeback pages Michal Hocko
@ 2015-12-01 12:56 ` Michal Hocko
  2015-12-02  7:07   ` Hillf Danton
  2015-12-11  8:42 ` [RFC 0/3] OOM detection rework v3 Michal Hocko
  3 siblings, 1 reply; 24+ messages in thread
From: Michal Hocko @ 2015-12-01 12:56 UTC (permalink / raw)
  To: linux-mm
  Cc: Andrew Morton, Linus Torvalds, Mel Gorman, Johannes Weiner,
	David Rientjes, Tetsuo Handa, Hillf Danton, KAMEZAWA Hiroyuki,
	Michal Hocko

From: Michal Hocko <mhocko@suse.com>

__alloc_pages_slowpath retries costly allocations until at least
order worth of pages has been reclaimed, or until the watermark check
for at least one zone would succeed after reclaiming all pages, if the
reclaim hasn't made any progress.

The first condition was added by a41f24ea9fd6 ("page allocator: smarter
retry of costly-order allocations") and it assumed that lumpy reclaim
could have created a page of the sufficient order. Lumpy reclaim has
been removed quite some time ago, so the assumption doesn't hold
anymore. It would be more appropriate to check the compaction progress
instead, but this patch simply removes the check and relies solely on
the watermark check.

To prevent too many retries, the stall_backoff is not reset after a
reclaim which made progress, because we cannot assume it helped the high
order situation.
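The backoff handling in the allocator therefore becomes (as in the hunk
below):

    /* costly __GFP_REPEAT allocations do not reset the backoff */
    if (did_some_progress && order <= PAGE_ALLOC_COSTLY_ORDER)
            stall_backoff = 0;
    else
            stall_backoff = min(stall_backoff + 1, MAX_STALL_BACKOFF);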

Signed-off-by: Michal Hocko <mhocko@suse.com>
---
 mm/page_alloc.c | 20 ++++++++------------
 1 file changed, 8 insertions(+), 12 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 168a675e9116..45de14cd62f4 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2998,7 +2998,6 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	bool can_direct_reclaim = gfp_mask & __GFP_DIRECT_RECLAIM;
 	struct page *page = NULL;
 	int alloc_flags;
-	unsigned long pages_reclaimed = 0;
 	unsigned long did_some_progress;
 	enum migrate_mode migration_mode = MIGRATE_ASYNC;
 	bool deferred_compaction = false;
@@ -3167,24 +3166,21 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 
 	/*
 	 * Do not retry high order allocations unless they are __GFP_REPEAT
-	 * and even then do not retry endlessly unless explicitly told so
+	 * unless explicitly told so.
 	 */
-	pages_reclaimed += did_some_progress;
-	if (order > PAGE_ALLOC_COSTLY_ORDER) {
-		if (!(gfp_mask & __GFP_NOFAIL) &&
-		   (!(gfp_mask & __GFP_REPEAT) || pages_reclaimed >= (1<<order)))
-			goto noretry;
-
-		if (did_some_progress)
-			goto retry;
-	}
+	if (order > PAGE_ALLOC_COSTLY_ORDER &&
+			!(gfp_mask & (__GFP_REPEAT|__GFP_NOFAIL)))
+		goto noretry;
 
 	/*
 	 * Be optimistic and consider all pages on reclaimable LRUs as usable
 	 * but make sure we converge to OOM if we cannot make any progress after
 	 * multiple consecutive failed attempts.
+	 * Costly __GFP_REPEAT allocations might have made a progress but this
+	 * doesn't mean their order will become available due to high fragmentation
+	 * so do not reset the backoff for them
 	 */
-	if (did_some_progress)
+	if (did_some_progress && order <= PAGE_ALLOC_COSTLY_ORDER)
 		stall_backoff = 0;
 	else
 		stall_backoff = min(stall_backoff+1, MAX_STALL_BACKOFF);
-- 
2.6.2


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* Re: [RFC 3/3] mm: use watermark checks for __GFP_REPEAT high order allocations
  2015-12-01 12:56 ` [RFC 3/3] mm: use watermark checks for __GFP_REPEAT high order allocations Michal Hocko
@ 2015-12-02  7:07   ` Hillf Danton
  2015-12-02  8:52     ` Michal Hocko
  0 siblings, 1 reply; 24+ messages in thread
From: Hillf Danton @ 2015-12-02  7:07 UTC (permalink / raw)
  To: 'Michal Hocko', linux-mm
  Cc: 'Andrew Morton', 'Linus Torvalds',
	'Mel Gorman', 'Johannes Weiner',
	'David Rientjes', 'Tetsuo Handa',
	'KAMEZAWA Hiroyuki', 'Michal Hocko'

> From: Michal Hocko <mhocko@suse.com>
> 
> __alloc_pages_slowpath retries costly allocations until at least
> order worth of pages were reclaimed or the watermark check for at least
> one zone would succeed after all reclaiming all pages if the reclaim
> hasn't made any progress.
> 
> The first condition was added by a41f24ea9fd6 ("page allocator: smarter
> retry of costly-order allocations) and it assumed that lumpy reclaim
> could have created a page of the sufficient order. Lumpy reclaim,
> has been removed quite some time ago so the assumption doesn't hold
> anymore. It would be more appropriate to check the compaction progress
> instead but this patch simply removes the check and relies solely
> on the watermark check.
> 
> To prevent from too many retries the stall_backoff is not reseted after
> a reclaim which made progress because we cannot assume it helped high
> order situation.
> 
> Signed-off-by: Michal Hocko <mhocko@suse.com>
> ---
>  mm/page_alloc.c | 20 ++++++++------------
>  1 file changed, 8 insertions(+), 12 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 168a675e9116..45de14cd62f4 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2998,7 +2998,6 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  	bool can_direct_reclaim = gfp_mask & __GFP_DIRECT_RECLAIM;
>  	struct page *page = NULL;
>  	int alloc_flags;
> -	unsigned long pages_reclaimed = 0;
>  	unsigned long did_some_progress;
>  	enum migrate_mode migration_mode = MIGRATE_ASYNC;
>  	bool deferred_compaction = false;
> @@ -3167,24 +3166,21 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> 
>  	/*
>  	 * Do not retry high order allocations unless they are __GFP_REPEAT
> -	 * and even then do not retry endlessly unless explicitly told so
> +	 * unless explicitly told so.

s/unless/or/

Acked-by: Hillf Danton <hillf.zj@alibaba-inc.com>

>  	 */
> -	pages_reclaimed += did_some_progress;
> -	if (order > PAGE_ALLOC_COSTLY_ORDER) {
> -		if (!(gfp_mask & __GFP_NOFAIL) &&
> -		   (!(gfp_mask & __GFP_REPEAT) || pages_reclaimed >= (1<<order)))
> -			goto noretry;
> -
> -		if (did_some_progress)
> -			goto retry;
> -	}
> +	if (order > PAGE_ALLOC_COSTLY_ORDER &&
> +			!(gfp_mask & (__GFP_REPEAT|__GFP_NOFAIL)))
> +		goto noretry;
> 
>  	/*
>  	 * Be optimistic and consider all pages on reclaimable LRUs as usable
>  	 * but make sure we converge to OOM if we cannot make any progress after
>  	 * multiple consecutive failed attempts.
> +	 * Costly __GFP_REPEAT allocations might have made a progress but this
> +	 * doesn't mean their order will become available due to high fragmentation
> +	 * so do not reset the backoff for them
>  	 */
> -	if (did_some_progress)
> +	if (did_some_progress && order <= PAGE_ALLOC_COSTLY_ORDER)
>  		stall_backoff = 0;
>  	else
>  		stall_backoff = min(stall_backoff+1, MAX_STALL_BACKOFF);
> --
> 2.6.2


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [RFC 2/3] mm: throttle on IO only when there are too many dirty and writeback pages
  2015-12-01 12:56 ` [RFC 2/3] mm: throttle on IO only when there are too many dirty and writeback pages Michal Hocko
@ 2015-12-02  7:09   ` Hillf Danton
  2015-12-11 16:25   ` Johannes Weiner
  1 sibling, 0 replies; 24+ messages in thread
From: Hillf Danton @ 2015-12-02  7:09 UTC (permalink / raw)
  To: 'Michal Hocko', linux-mm
  Cc: 'Andrew Morton', 'Linus Torvalds',
	'Mel Gorman', 'Johannes Weiner',
	'David Rientjes', 'Tetsuo Handa',
	'KAMEZAWA Hiroyuki', 'Michal Hocko'

> From: Michal Hocko <mhocko@suse.com>
> 
> wait_iff_congested has been used to throttle allocator before it retried
> another round of direct reclaim to allow the writeback to make some
> progress and prevent reclaim from looping over dirty/writeback pages
> without making any progress. We used to do congestion_wait before
> 0e093d99763e ("writeback: do not sleep on the congestion queue if
> there are no congested BDIs or if significant congestion is not being
> encountered in the current zone") but that led to undesirable stalls
> and sleeping for the full timeout even when the BDI wasn't congested.
> Hence wait_iff_congested was used instead. But it seems that even
> wait_iff_congested doesn't work as expected. We might have a small file
> LRU list with all pages dirty/writeback and yet the bdi is not congested
> so this is just a cond_resched in the end and can end up triggering pre
> mature OOM.
> 
> This patch replaces the unconditional wait_iff_congested by
> congestion_wait which is executed only if we _know_ that the last round
> of direct reclaim didn't make any progress and dirty+writeback pages are
> more than a half of the reclaimable pages on the zone which might be
> usable for our target allocation. This shouldn't reintroduce stalls
> fixed by 0e093d99763e because congestion_wait is called only when we
> are getting hopeless when sleeping is a better choice than OOM with many
> pages under IO.
> 
> We have to preserve logic introduced by "mm, vmstat: allow WQ concurrency
> to discover memory reclaim doesn't make any progress" into the
> __alloc_pages_slowpath now that wait_iff_congested is not used anymore.
> As the only remaining user of wait_iff_congested is shrink_inactive_list
> we can remove the WQ specific short sleep from wait_iff_congested
> because the sleep is needed to be done only once in the allocation retry
> cycle.
> 
> Signed-off-by: Michal Hocko <mhocko@suse.com>
> ---

Acked-by: Hillf Danton <hillf.zj@alibaba-inc.com>

>  mm/backing-dev.c | 19 +++----------------
>  mm/page_alloc.c  | 33 ++++++++++++++++++++++++++++++---
>  2 files changed, 33 insertions(+), 19 deletions(-)
> 
> diff --git a/mm/backing-dev.c b/mm/backing-dev.c
> index 7340353f8aea..d2473ce9cc57 100644
> --- a/mm/backing-dev.c
> +++ b/mm/backing-dev.c
> @@ -957,9 +957,8 @@ EXPORT_SYMBOL(congestion_wait);
>   * jiffies for either a BDI to exit congestion of the given @sync queue
>   * or a write to complete.
>   *
> - * In the absence of zone congestion, a short sleep or a cond_resched is
> - * performed to yield the processor and to allow other subsystems to make
> - * a forward progress.
> + * In the absence of zone congestion, cond_resched() is called to yield
> + * the processor if necessary but otherwise does not sleep.
>   *
>   * The return value is 0 if the sleep is for the full timeout. Otherwise,
>   * it is the number of jiffies that were still remaining when the function
> @@ -980,19 +979,7 @@ long wait_iff_congested(struct zone *zone, int sync, long timeout)
>  	if (atomic_read(&nr_wb_congested[sync]) == 0 ||
>  	    !test_bit(ZONE_CONGESTED, &zone->flags)) {
> 
> -		/*
> -		 * Memory allocation/reclaim might be called from a WQ
> -		 * context and the current implementation of the WQ
> -		 * concurrency control doesn't recognize that a particular
> -		 * WQ is congested if the worker thread is looping without
> -		 * ever sleeping. Therefore we have to do a short sleep
> -		 * here rather than calling cond_resched().
> -		 */
> -		if (current->flags & PF_WQ_WORKER)
> -			schedule_timeout(1);
> -		else
> -			cond_resched();
> -
> +		cond_resched();
>  		/* In case we scheduled, work out time remaining */
>  		ret = timeout - (jiffies - start);
>  		if (ret < 0)
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index af221067de6a..168a675e9116 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -3198,8 +3198,9 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  	for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx, ac->nodemask) {
>  		unsigned long free = zone_page_state_snapshot(zone, NR_FREE_PAGES);
>  		unsigned long target;
> +		unsigned long reclaimable;
> 
> -		target = zone_reclaimable_pages(zone);
> +		reclaimable = target = zone_reclaimable_pages(zone);
>  		target -= DIV_ROUND_UP(stall_backoff * target, MAX_STALL_BACKOFF);
>  		target += free;
> 
> @@ -3208,8 +3209,34 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  		 */
>  		if (__zone_watermark_ok(zone, order, min_wmark_pages(zone),
>  				ac->high_zoneidx, alloc_flags, target)) {
> -			/* Wait for some write requests to complete then retry */
> -			wait_iff_congested(zone, BLK_RW_ASYNC, HZ/50);
> +			unsigned long writeback = zone_page_state_snapshot(zone, NR_WRITEBACK),
> +				      dirty = zone_page_state_snapshot(zone, NR_FILE_DIRTY);
> +
> +			/*
> +			 * If we didn't make any progress and have a lot of
> +			 * dirty + writeback pages then we should wait for
> +			 * an IO to complete to slow down the reclaim and
> +			 * prevent from pre mature OOM
> +			 */
> +			if (!did_some_progress && 2*(writeback + dirty) > reclaimable) {
> +				congestion_wait(BLK_RW_ASYNC, HZ/10);
> +				goto retry;
> +			}
> +
> +			/*
> +			 * Memory allocation/reclaim might be called from a WQ
> +			 * context and the current implementation of the WQ
> +			 * concurrency control doesn't recognize that
> +			 * a particular WQ is congested if the worker thread is
> +			 * looping without ever sleeping. Therefore we have to
> +			 * do a short sleep here rather than calling
> +			 * cond_resched().
> +			 */
> +			if (current->flags & PF_WQ_WORKER)
> +				schedule_timeout(1);
> +			else
> +				cond_resched();
> +
>  			goto retry;
>  		}
>  	}
> --
> 2.6.2


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [RFC 3/3] mm: use watermark checks for __GFP_REPEAT high order allocations
  2015-12-02  7:07   ` Hillf Danton
@ 2015-12-02  8:52     ` Michal Hocko
  0 siblings, 0 replies; 24+ messages in thread
From: Michal Hocko @ 2015-12-02  8:52 UTC (permalink / raw)
  To: Hillf Danton
  Cc: linux-mm, 'Andrew Morton', 'Linus Torvalds',
	'Mel Gorman', 'Johannes Weiner',
	'David Rientjes', 'Tetsuo Handa',
	'KAMEZAWA Hiroyuki'

On Wed 02-12-15 15:07:26, Hillf Danton wrote:
> > From: Michal Hocko <mhocko@suse.com>
> > 
> > __alloc_pages_slowpath retries costly allocations until at least
> > order worth of pages were reclaimed or the watermark check for at least
> > one zone would succeed after all reclaiming all pages if the reclaim
> > hasn't made any progress.
> > 
> > The first condition was added by a41f24ea9fd6 ("page allocator: smarter
> > retry of costly-order allocations) and it assumed that lumpy reclaim
> > could have created a page of the sufficient order. Lumpy reclaim,
> > has been removed quite some time ago so the assumption doesn't hold
> > anymore. It would be more appropriate to check the compaction progress
> > instead but this patch simply removes the check and relies solely
> > on the watermark check.
> > 
> > To prevent from too many retries the stall_backoff is not reseted after
> > a reclaim which made progress because we cannot assume it helped high
> > order situation.
> > 
> > Signed-off-by: Michal Hocko <mhocko@suse.com>
> > ---
> >  mm/page_alloc.c | 20 ++++++++------------
> >  1 file changed, 8 insertions(+), 12 deletions(-)
> > 
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 168a675e9116..45de14cd62f4 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -2998,7 +2998,6 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> >  	bool can_direct_reclaim = gfp_mask & __GFP_DIRECT_RECLAIM;
> >  	struct page *page = NULL;
> >  	int alloc_flags;
> > -	unsigned long pages_reclaimed = 0;
> >  	unsigned long did_some_progress;
> >  	enum migrate_mode migration_mode = MIGRATE_ASYNC;
> >  	bool deferred_compaction = false;
> > @@ -3167,24 +3166,21 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> > 
> >  	/*
> >  	 * Do not retry high order allocations unless they are __GFP_REPEAT
> > -	 * and even then do not retry endlessly unless explicitly told so
> > +	 * unless explicitly told so.
> 
> s/unless/or/

Fixed
 
> Acked-by: Hillf Danton <hillf.zj@alibaba-inc.com>

Thanks!

> 
> >  	 */
> > -	pages_reclaimed += did_some_progress;
> > -	if (order > PAGE_ALLOC_COSTLY_ORDER) {
> > -		if (!(gfp_mask & __GFP_NOFAIL) &&
> > -		   (!(gfp_mask & __GFP_REPEAT) || pages_reclaimed >= (1<<order)))
> > -			goto noretry;
> > -
> > -		if (did_some_progress)
> > -			goto retry;
> > -	}
> > +	if (order > PAGE_ALLOC_COSTLY_ORDER &&
> > +			!(gfp_mask & (__GFP_REPEAT|__GFP_NOFAIL)))
> > +		goto noretry;
> > 
> >  	/*
> >  	 * Be optimistic and consider all pages on reclaimable LRUs as usable
> >  	 * but make sure we converge to OOM if we cannot make any progress after
> >  	 * multiple consecutive failed attempts.
> > +	 * Costly __GFP_REPEAT allocations might have made a progress but this
> > +	 * doesn't mean their order will become available due to high fragmentation
> > +	 * so do not reset the backoff for them
> >  	 */
> > -	if (did_some_progress)
> > +	if (did_some_progress && order <= PAGE_ALLOC_COSTLY_ORDER)
> >  		stall_backoff = 0;
> >  	else
> >  		stall_backoff = min(stall_backoff+1, MAX_STALL_BACKOFF);
> > --
> > 2.6.2
> 

-- 
Michal Hocko
SUSE Labs


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [RFC 0/3] OOM detection rework v3
  2015-12-01 12:56 [RFC 0/3] OOM detection rework v3 Michal Hocko
                   ` (2 preceding siblings ...)
  2015-12-01 12:56 ` [RFC 3/3] mm: use watermark checks for __GFP_REPEAT high order allocations Michal Hocko
@ 2015-12-11  8:42 ` Michal Hocko
  3 siblings, 0 replies; 24+ messages in thread
From: Michal Hocko @ 2015-12-11  8:42 UTC (permalink / raw)
  To: linux-mm
  Cc: Andrew Morton, Linus Torvalds, Mel Gorman, Johannes Weiner,
	David Rientjes, Tetsuo Handa, Hillf Danton, KAMEZAWA Hiroyuki

Hi,
are there any fundamental objections to the new approach? I am very well
aware that this is not a small change and it will take some time to
settle, but can we move on and get this into the mmotm tree (and
linux-next) so that it gets larger test coverage? I do not think this is
material for the next merge window. Maybe 4.6?

What do you think?
-- 
Michal Hocko
SUSE Labs


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [RFC 1/3] mm, oom: refactor oom detection
  2015-12-01 12:56 ` [RFC 1/3] mm, oom: refactor oom detection Michal Hocko
@ 2015-12-11 16:16   ` Johannes Weiner
  2015-12-14 18:34     ` Michal Hocko
  0 siblings, 1 reply; 24+ messages in thread
From: Johannes Weiner @ 2015-12-11 16:16 UTC (permalink / raw)
  To: Michal Hocko
  Cc: linux-mm, Andrew Morton, Linus Torvalds, Mel Gorman,
	David Rientjes, Tetsuo Handa, Hillf Danton, KAMEZAWA Hiroyuki,
	Michal Hocko

On Tue, Dec 01, 2015 at 01:56:45PM +0100, Michal Hocko wrote:
> From: Michal Hocko <mhocko@suse.com>
> 
> __alloc_pages_slowpath has traditionally relied on the direct reclaim
> and did_some_progress as an indicator that it makes sense to retry
> allocation rather than declaring OOM. shrink_zones had to rely on
> zone_reclaimable if shrink_zone didn't make any progress to prevent
> from a premature OOM killer invocation - the LRU might be full of dirty
> or writeback pages and direct reclaim cannot clean those up.
> 
> zone_reclaimable allows to rescan the reclaimable lists several
> times and restart if a page is freed. This is really subtle behavior
> and it might lead to a livelock when a single freed page keeps allocator
> looping but the current task will not be able to allocate that single
> page. OOM killer would be more appropriate than looping without any
> progress for unbounded amount of time.
> 
> This patch changes OOM detection logic and pulls it out from shrink_zone
> which is too low to be appropriate for any high level decisions such as OOM
> which is per zonelist property. It is __alloc_pages_slowpath which knows
> how many attempts have been done and what was the progress so far
> therefore it is more appropriate to implement this logic.
> 
> The new heuristic tries to be more deterministic and easier to follow.
> It builds on an assumption that retrying makes sense only if the
> currently reclaimable memory + free pages would allow the current
> allocation request to succeed (as per __zone_watermark_ok) at least for
> one zone in the usable zonelist.
> 
> This alone wouldn't be sufficient, though, because the writeback might
> get stuck and reclaimable pages might be pinned for a really long time
> or even depend on the current allocation context. Therefore there is a
> feedback mechanism implemented which reduces the reclaim target after
> each reclaim round without any progress. This means that we should
> eventually converge to only NR_FREE_PAGES as the target and fail on the
> wmark check and proceed to OOM. The backoff is simple and linear with
> 1/16 of the reclaimable pages for each round without any progress. We
> are optimistic and reset counter for successful reclaim rounds.
> 
> Costly high order pages mostly preserve their semantic and those without
> __GFP_REPEAT fail right away while those which have the flag set will
> back off after the amount of reclaimable pages reaches equivalent of the
> requested order. The only difference is that if there was no progress
> during the reclaim we rely on zone watermark check. This is more logical
> thing to do than previous 1<<order attempts which were a result of
> zone_reclaimable faking the progress.
> 
> [rientjes@google.com: use zone_page_state_snapshot for NR_FREE_PAGES]
> [rientjes@google.com: shrink_zones doesn't need to return anything]
> Acked-by: Hillf Danton <hillf.zj@alibaba-inc.com>
> Signed-off-by: Michal Hocko <mhocko@suse.com>

This makes sense to me and the patch looks good. Just a few nitpicks.

Could you change the word "refactor" in the title? This is not a
non-functional change.

> @@ -2984,6 +2984,13 @@ static inline bool is_thp_gfp_mask(gfp_t gfp_mask)
>  	return (gfp_mask & (GFP_TRANSHUGE | __GFP_KSWAPD_RECLAIM)) == GFP_TRANSHUGE;
>  }
>  
> +/*
> + * Number of backoff steps for potentially reclaimable pages if the direct reclaim
> + * cannot make any progress. Each step will reduce 1/MAX_STALL_BACKOFF of the
> + * reclaimable memory.
> + */
> +#define MAX_STALL_BACKOFF 16

"stall backoff" is a fairly non-descript and doesn't give a good clue
at what exactly the variable is going to be doing.

How about MAX_DISCOUNT_RECLAIMABLE?

> @@ -3155,13 +3165,53 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  	if (gfp_mask & __GFP_NORETRY)
>  		goto noretry;
>  
> -	/* Keep reclaiming pages as long as there is reasonable progress */
> +	/*
> +	 * Do not retry high order allocations unless they are __GFP_REPEAT
> +	 * and even then do not retry endlessly unless explicitly told so
> +	 */
>  	pages_reclaimed += did_some_progress;
> -	if ((did_some_progress && order <= PAGE_ALLOC_COSTLY_ORDER) ||
> -	    ((gfp_mask & __GFP_REPEAT) && pages_reclaimed < (1 << order))) {
> -		/* Wait for some write requests to complete then retry */
> -		wait_iff_congested(ac->preferred_zone, BLK_RW_ASYNC, HZ/50);
> -		goto retry;
> +	if (order > PAGE_ALLOC_COSTLY_ORDER) {
> +		if (!(gfp_mask & __GFP_NOFAIL) &&
> +		   (!(gfp_mask & __GFP_REPEAT) || pages_reclaimed >= (1<<order)))
> +			goto noretry;
> +
> +		if (did_some_progress)
> +			goto retry;
> +	}

I'm a bit bothered by this change as goto noretry is not the inverse
of not doing goto retry: goto noretry jumps over _may_oom().

Of course, _may_oom() would filter a higher-order allocation anyway,
and we could say that it's such a fundamental concept that will never
change in the kernel that it's not a problem to repeat this clause
here. But you could probably say the same thing about not invoking OOM
for < ZONE_NORMAL, for !__GFP_FS, for __GFP_THISNODE, and I'm a bit
wary of these things spreading out of _may_oom() again after I just
put effort into consolidating all the OOM clauses in there.

It should be possible to keep the original branch and then nest the
decaying retry logic in there.

> +	/*
> +	 * Be optimistic and consider all pages on reclaimable LRUs as usable
> +	 * but make sure we converge to OOM if we cannot make any progress after
> +	 * multiple consecutive failed attempts.
> +	 */
> +	if (did_some_progress)
> +		stall_backoff = 0;
> +	else
> +		stall_backoff = min(stall_backoff+1, MAX_STALL_BACKOFF);

The rest of the backoff logic would be nasty to shift out by another
tab, but it could easily live in its own function.

In fact, the longer I think about it, it would probably be better for
__alloc_pages_slowpath anyway, as that zonelist walk looks a bit too
low-level and unwieldy for the high-level control flow function.

The outer control flow could look something like this:

	/* Do not loop if specifically requested */
	if (gfp_mask & __GFP_NORETRY)
		goto noretry;

	/* Keep reclaiming pages as long as there is reasonable progress */
	if (did_some_progress) {
		pages_reclaimed += did_some_progress;
		no_progress_loops = 0;
	} else {
		no_progress_loops++;
	}
	if (should_retry_reclaim(gfp_mask, order, ac, did_some_progress,
				 no_progress_loops, pages_reclaimed)) {
		/* Wait for some write requests to complete then retry */
		wait_iff_congested(ac->preferred_zone, BLK_RW_ASYNC, HZ/50);
		goto retry;
	}

	/* Reclaim has failed us, start killing things */
	page = __alloc_pages_may_oom(gfp_mask, order, ac, &did_some_progress);
	if (page)
		goto got_pg;

	/* Retry as long as the OOM killer is making progress */
	if (did_some_progress) {
		no_progress_loops = 0;
		goto retry;
	}

noretry:

> +	/*
> +	 * Keep reclaiming pages while there is a chance this will lead somewhere.
> +	 * If none of the target zones can satisfy our allocation request even
> +	 * if all reclaimable pages are considered then we are screwed and have
> +	 * to go OOM.
> +	 */
> +	for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx, ac->nodemask) {
> +		unsigned long free = zone_page_state_snapshot(zone, NR_FREE_PAGES);
> +		unsigned long target;
> +
> +		target = zone_reclaimable_pages(zone);
> +		target -= DIV_ROUND_UP(stall_backoff * target, MAX_STALL_BACKOFF);
> +		target += free;

target is also a little non-descript. Maybe available?

		available += zone_reclaimable_pages(zone);
		available -= DIV_ROUND_UP(discount_reclaimable * available,
					  MAX_DISCOUNT_RECLAIMABLE);
		available += zone_page_state_snapshot(zone, NR_FREE_PAGES);

But yeah, this is mostly bikeshed territory now.


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [RFC 2/3] mm: throttle on IO only when there are too many dirty and writeback pages
  2015-12-01 12:56 ` [RFC 2/3] mm: throttle on IO only when there are too many dirty and writeback pages Michal Hocko
  2015-12-02  7:09   ` Hillf Danton
@ 2015-12-11 16:25   ` Johannes Weiner
  1 sibling, 0 replies; 24+ messages in thread
From: Johannes Weiner @ 2015-12-11 16:25 UTC (permalink / raw)
  To: Michal Hocko
  Cc: linux-mm, Andrew Morton, Linus Torvalds, Mel Gorman,
	David Rientjes, Tetsuo Handa, Hillf Danton, KAMEZAWA Hiroyuki,
	Michal Hocko

On Tue, Dec 01, 2015 at 01:56:46PM +0100, Michal Hocko wrote:
> From: Michal Hocko <mhocko@suse.com>
> 
> wait_iff_congested has been used to throttle allocator before it retried
> another round of direct reclaim to allow the writeback to make some
> progress and prevent reclaim from looping over dirty/writeback pages
> without making any progress. We used to do congestion_wait before
> 0e093d99763e ("writeback: do not sleep on the congestion queue if
> there are no congested BDIs or if significant congestion is not being
> encountered in the current zone") but that led to undesirable stalls
> and sleeping for the full timeout even when the BDI wasn't congested.
> Hence wait_iff_congested was used instead. But it seems that even
> wait_iff_congested doesn't work as expected. We might have a small file
> LRU list with all pages dirty/writeback and yet the bdi is not congested
> so this is just a cond_resched in the end and can end up triggering pre
> mature OOM.
> 
> This patch replaces the unconditional wait_iff_congested by
> congestion_wait which is executed only if we _know_ that the last round
> of direct reclaim didn't make any progress and dirty+writeback pages are
> more than a half of the reclaimable pages on the zone which might be
> usable for our target allocation. This shouldn't reintroduce stalls
> fixed by 0e093d99763e because congestion_wait is called only when we
> are getting hopeless when sleeping is a better choice than OOM with many
> pages under IO.
> 
> We have to preserve logic introduced by "mm, vmstat: allow WQ concurrency
> to discover memory reclaim doesn't make any progress" into the
> __alloc_pages_slowpath now that wait_iff_congested is not used anymore.
> As the only remaining user of wait_iff_congested is shrink_inactive_list
> we can remove the WQ specific short sleep from wait_iff_congested
> because the sleep is needed to be done only once in the allocation retry
> cycle.
> 
> Signed-off-by: Michal Hocko <mhocko@suse.com>

Yep, this looks like the right thing to do. However, the code it adds
to __alloc_pages_slowpath() is putting even more weight behind the
argument that the reclaim retry logic should be in its own function.


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [RFC 1/3] mm, oom: refactor oom detection
  2015-12-11 16:16   ` Johannes Weiner
@ 2015-12-14 18:34     ` Michal Hocko
  0 siblings, 0 replies; 24+ messages in thread
From: Michal Hocko @ 2015-12-14 18:34 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: linux-mm, Andrew Morton, Linus Torvalds, Mel Gorman,
	David Rientjes, Tetsuo Handa, Hillf Danton, KAMEZAWA Hiroyuki

On Fri 11-12-15 11:16:15, Johannes Weiner wrote:
> On Tue, Dec 01, 2015 at 01:56:45PM +0100, Michal Hocko wrote:
> > From: Michal Hocko <mhocko@suse.com>
[...]
> This makes sense to me and the patch looks good. Just a few nitpicks.

Thanks for the review!

> Could you change the word "refactor" in the title? This is not a
> non-functional change.

Sure. I will go with rework.

> > @@ -2984,6 +2984,13 @@ static inline bool is_thp_gfp_mask(gfp_t gfp_mask)
> >  	return (gfp_mask & (GFP_TRANSHUGE | __GFP_KSWAPD_RECLAIM)) == GFP_TRANSHUGE;
> >  }
> >  
> > +/*
> > + * Number of backoff steps for potentially reclaimable pages if the direct reclaim
> > + * cannot make any progress. Each step will reduce 1/MAX_STALL_BACKOFF of the
> > + * reclaimable memory.
> > + */
> > +#define MAX_STALL_BACKOFF 16
> 
> "stall backoff" is a fairly non-descript and doesn't give a good clue
> at what exactly the variable is going to be doing.

The idea was to reflect that this is a step in which we do a backoff
rather than an absolute amount.

> How about MAX_DISCOUNT_RECLAIMABLE?

This would indicate an absolute value to me. What about MAX_RECLAIM_RETRIES?

> > @@ -3155,13 +3165,53 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> >  	if (gfp_mask & __GFP_NORETRY)
> >  		goto noretry;
> >  
> > -	/* Keep reclaiming pages as long as there is reasonable progress */
> > +	/*
> > +	 * Do not retry high order allocations unless they are __GFP_REPEAT
> > +	 * and even then do not retry endlessly unless explicitly told so
> > +	 */
> >  	pages_reclaimed += did_some_progress;
> > -	if ((did_some_progress && order <= PAGE_ALLOC_COSTLY_ORDER) ||
> > -	    ((gfp_mask & __GFP_REPEAT) && pages_reclaimed < (1 << order))) {
> > -		/* Wait for some write requests to complete then retry */
> > -		wait_iff_congested(ac->preferred_zone, BLK_RW_ASYNC, HZ/50);
> > -		goto retry;
> > +	if (order > PAGE_ALLOC_COSTLY_ORDER) {
> > +		if (!(gfp_mask & __GFP_NOFAIL) &&
> > +		   (!(gfp_mask & __GFP_REPEAT) || pages_reclaimed >= (1<<order)))
> > +			goto noretry;
> > +
> > +		if (did_some_progress)
> > +			goto retry;
> > +	}
> 
> I'm a bit bothered by this change as goto noretry is not the inverse
> of not doing goto retry: goto noretry jumps over _may_oom().
> 
> Of course, _may_oom() would filter a higher-order allocation anyway,
> and we could say that it's such a fundamental concept that will never
> change in the kernel that it's not a problem to repeat this clause
> here. But you could probably say the same thing about not invoking OOM
> for < ZONE_NORMAL, for !__GFP_FS, for __GFP_THISNODE, and I'm a bit
> wary of these things spreading out of _may_oom() again after I just
> put effort into consolidating all the OOM clauses in there.

You are right. Our OOM rules are complex already and any partial rules
outside of _may_oom only add to the confusion.

> It should be possible to keep the original branch and then nest the
> decaying retry logic in there.
> 
> > +	/*
> > +	 * Be optimistic and consider all pages on reclaimable LRUs as usable
> > +	 * but make sure we converge to OOM if we cannot make any progress after
> > +	 * multiple consecutive failed attempts.
> > +	 */
> > +	if (did_some_progress)
> > +		stall_backoff = 0;
> > +	else
> > +		stall_backoff = min(stall_backoff+1, MAX_STALL_BACKOFF);
> 
> The rest of the backoff logic would be nasty to shift out by another
> tab, but it could easily live in its own function.

Yeah, I'll go that way. It does look much better!

[...]
> > +	/*
> > +	 * Keep reclaiming pages while there is a chance this will lead somewhere.
> > +	 * If none of the target zones can satisfy our allocation request even
> > +	 * if all reclaimable pages are considered then we are screwed and have
> > +	 * to go OOM.
> > +	 */
> > +	for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx, ac->nodemask) {
> > +		unsigned long free = zone_page_state_snapshot(zone, NR_FREE_PAGES);
> > +		unsigned long target;
> > +
> > +		target = zone_reclaimable_pages(zone);
> > +		target -= DIV_ROUND_UP(stall_backoff * target, MAX_STALL_BACKOFF);
> > +		target += free;
> 
> target is also a little non-descript. Maybe available?
> 
> 		available += zone_reclaimable_pages(zone);
> 		available -= DIV_ROUND_UP(discount_reclaimable * available,
> 					  MAX_DISCOUNT_RECLAIMABLE);
> 		available += zone_page_state_snapshot(zone, NR_FREE_PAGES);
> 
> But yeah, this is mostly bikeshed territory now.

Thanks for the feedback. Here is the cumulative diff after all my current
changes.
---
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index e267faad4649..f77e283fb8c6 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2984,6 +2984,75 @@ static inline bool is_thp_gfp_mask(gfp_t gfp_mask)
 	return (gfp_mask & (GFP_TRANSHUGE | __GFP_KSWAPD_RECLAIM)) == GFP_TRANSHUGE;
 }
 
+/*
+ * Maximum number of reclaim retries without any progress before the OOM
+ * killer is considered the only way to move forward.
+ */
+#define MAX_RECLAIM_RETRIES 16
+
+/*
+ * Checks whether it makes sense to retry the reclaim to make forward progress
+ * for the given allocation request.
+ * The reclaim feedback represented by did_some_progress (any progress during
+ * the last reclaim round), pages_reclaimed (cumulative number of reclaimed
+ * pages) and no_progress_loops (number of reclaim rounds without any progress
+ * in a row) is considered as well as the reclaimable pages on the applicable
+ * zone list (with a backoff mechanism which is a function of no_progress_loops).
+ *
+ * Returns true if a retry is viable or false to enter the oom path.
+ */
+static inline bool
+should_reclaim_retry(gfp_t gfp_mask, unsigned order,
+		     struct alloc_context *ac, int alloc_flags,
+		     bool did_some_progress, unsigned long pages_reclaimed,
+		     int no_progress_loops)
+{
+	struct zone *zone;
+	struct zoneref *z;
+
+	/*
+	 * Make sure we converge to OOM if we cannot make any progress
+	 * several times in a row.
+	 */
+	if (no_progress_loops > MAX_RECLAIM_RETRIES)
+		return false;
+
+	/* Do not retry high order allocations unless they are __GFP_REPEAT */
+	if (order > PAGE_ALLOC_COSTLY_ORDER) {
+		if (!(gfp_mask & __GFP_REPEAT) || pages_reclaimed >= (1<<order))
+			return false;
+
+		if (did_some_progress)
+			return true;
+	}
+
+	/*
+	 * Keep reclaiming pages while there is a chance this will lead somewhere.
+	 * If none of the target zones can satisfy our allocation request even
+	 * if all reclaimable pages are considered then we are screwed and have
+	 * to go OOM.
+	 */
+	for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx, ac->nodemask) {
+		unsigned long available;
+
+		available = zone_reclaimable_pages(zone);
+		available -= DIV_ROUND_UP(no_progress_loops * available, MAX_RECLAIM_RETRIES);
+		available += zone_page_state_snapshot(zone, NR_FREE_PAGES);
+
+		/*
+		 * Would the allocation succeed if we reclaimed the whole available?
+		 */
+		if (__zone_watermark_ok(zone, order, min_wmark_pages(zone),
+				ac->high_zoneidx, alloc_flags, available)) {
+			/* Wait for some write requests to complete then retry */
+			wait_iff_congested(zone, BLK_RW_ASYNC, HZ/50);
+			return true;
+		}
+	}
+
+	return false;
+}
+
 static inline struct page *
 __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 						struct alloc_context *ac)
@@ -2996,6 +3065,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	enum migrate_mode migration_mode = MIGRATE_ASYNC;
 	bool deferred_compaction = false;
 	int contended_compaction = COMPACT_CONTENDED_NONE;
+	int no_progress_loops = 0;
 
 	/*
 	 * In the slowpath, we sanity check order to avoid ever trying to
@@ -3155,23 +3225,28 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	if (gfp_mask & __GFP_NORETRY)
 		goto noretry;
 
-	/* Keep reclaiming pages as long as there is reasonable progress */
-	pages_reclaimed += did_some_progress;
-	if ((did_some_progress && order <= PAGE_ALLOC_COSTLY_ORDER) ||
-	    ((gfp_mask & __GFP_REPEAT) && pages_reclaimed < (1 << order))) {
-		/* Wait for some write requests to complete then retry */
-		wait_iff_congested(ac->preferred_zone, BLK_RW_ASYNC, HZ/50);
-		goto retry;
+	if (did_some_progress) {
+		no_progress_loops = 0;
+		pages_reclaimed += did_some_progress;
+	} else {
+		no_progress_loops++;
 	}
 
+	if (should_reclaim_retry(gfp_mask, order, ac, alloc_flags,
+				 did_some_progress > 0, pages_reclaimed,
+				 no_progress_loops))
+		goto retry;
+
 	/* Reclaim has failed us, start killing things */
 	page = __alloc_pages_may_oom(gfp_mask, order, ac, &did_some_progress);
 	if (page)
 		goto got_pg;
 
 	/* Retry as long as the OOM killer is making progress */
-	if (did_some_progress)
+	if (did_some_progress) {
+		no_progress_loops = 0;
 		goto retry;
+	}
 
 noretry:
 	/*
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* Re: [RFC 2/3] mm: throttle on IO only when there are too many dirty and writeback pages
  2015-11-19 23:12   ` David Rientjes
@ 2015-11-20  9:15     ` Michal Hocko
  0 siblings, 0 replies; 24+ messages in thread
From: Michal Hocko @ 2015-11-20  9:15 UTC (permalink / raw)
  To: David Rientjes
  Cc: linux-mm, Andrew Morton, Linus Torvalds, Mel Gorman,
	Johannes Weiner, Tetsuo Handa, Hillf Danton, KAMEZAWA Hiroyuki

On Thu 19-11-15 15:12:39, David Rientjes wrote:
> On Wed, 18 Nov 2015, Michal Hocko wrote:
> 
> > From: Michal Hocko <mhocko@suse.com>
> > 
> > wait_iff_congested has been used to throttle the allocator before it
> > retried another round of direct reclaim to allow the writeback to make
> > some progress and to prevent reclaim from looping over dirty/writeback
> > pages without making any progress. We used to do congestion_wait before
> > 0e093d99763e ("writeback: do not sleep on the congestion queue if
> > there are no congested BDIs or if significant congestion is not being
> > encountered in the current zone") but that led to undesirable stalls
> > and sleeping for the full timeout even when the BDI wasn't congested.
> > Hence wait_iff_congested was used instead. But it seems that even
> > wait_iff_congested doesn't work as expected. We might have a small file
> > LRU list with all pages dirty/writeback and yet the bdi is not congested,
> > so this ends up being just a cond_resched and can trigger a premature OOM.
> >
> > This patch replaces the unconditional wait_iff_congested with
> > congestion_wait which is executed only if we _know_ that the last round
> > of direct reclaim didn't make any progress and dirty+writeback pages are
> > more than half of the reclaimable pages in the zone which might be
> > usable for our target allocation. This shouldn't reintroduce the stalls
> > fixed by 0e093d99763e because congestion_wait is called only when we
> > are getting hopeless and sleeping is a better choice than going OOM with
> > many pages under IO.
> > 
> 
> Why HZ/10 instead of HZ/50?

My idea was to give the writeback more time. As we only wait when there
is a lot of dirty/writeback data, it shouldn't stall pointlessly.
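
For reference, a rough userspace sketch (assuming e.g. CONFIG_HZ=250, not
kernel code): the timeout argument is in jiffies, so HZ/50 means sleeping
for at most ~20ms per retry while HZ/10 allows up to ~100ms:

#include <stdio.h>

int main(void)
{
	const unsigned int hz = 250;	/* assumed CONFIG_HZ value */

	printf("HZ/50 -> %u jiffies (~%u ms)\n", hz / 50, (hz / 50) * 1000 / hz);
	printf("HZ/10 -> %u jiffies (~%u ms)\n", hz / 10, (hz / 10) * 1000 / hz);
	return 0;
}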

> > Signed-off-by: Michal Hocko <mhocko@suse.com>
> > ---
> >  mm/page_alloc.c | 16 ++++++++++++++--
> >  1 file changed, 14 insertions(+), 2 deletions(-)
> > 
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 020c005c5bc0..e6271bc19e6a 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -3212,8 +3212,20 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> >  		 */
> >  		if (__zone_watermark_ok(zone, order, min_wmark_pages(zone),
> >  				ac->high_zoneidx, alloc_flags, target)) {
> > -			/* Wait for some write requests to complete then retry */
> > -			wait_iff_congested(zone, BLK_RW_ASYNC, HZ/50);
> > +			unsigned long writeback = zone_page_state(zone, NR_WRITEBACK),
> > +				      dirty = zone_page_state(zone, NR_FILE_DIRTY);
> > +
> > +			/*
> > +			 * If we didn't make any progress and have a lot of
> > +			 * dirty + writeback pages then we should wait for
> > +			 * an IO to complete to slow down the reclaim and
> > +			 * prevent a premature OOM
> > +			 */
> > +			if (!did_some_progress && 2*(writeback + dirty) > reclaimable)
> > +				congestion_wait(BLK_RW_ASYNC, HZ/10);
> 
> The purpose of the heuristic seems logical, but I'm concerned about the 
> threshold for determining when to wait and when to just resched and retry 
> again.
> 
> This triggers for environments without swap when
> 
> 2 * (NR_WRITEBACK + NR_DIRTY) > (NR_ACTIVE_FILE + NR_INACTIVE_FILE +
> 				 NR_ISOLATED_FILE + NR_ISOLATED_ANON)
> 
>  [ The use of NR_ISOLATED_ANON in swapless is asked about in patch 1. ]
> 
> How exactly was this chosen?  Why not when the two sides equal each other?

The idea was to stall when the to-be-flushed pages form at least half of
the reclaimable memory which sounds like an easy concept to start with.
This worked reasonably well in my OOM stress tests but I am open to
suggestions. Ideally this should be a function of the writeback speed
and maybe we will get there one day but I would like to start with
something simple which works most of the time.
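
As a standalone sketch (userspace stand-ins for the per-zone counters, not
the kernel code itself), the throttling decision the patch makes boils down
to the following check:

#include <stdbool.h>
#include <stdio.h>

/* Sleep only when reclaim is stuck and most of the zone is under IO. */
static bool should_throttle_on_io(unsigned long dirty,
				  unsigned long writeback,
				  unsigned long reclaimable,
				  bool did_some_progress)
{
	return !did_some_progress && 2 * (writeback + dirty) > reclaimable;
}

int main(void)
{
	/* example: 300 dirty + 300 writeback out of 1000 reclaimable pages */
	printf("throttle: %d\n", should_throttle_on_io(300, 300, 1000, false));
	return 0;
}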

> 
> > +			else
> > +				cond_resched();
> > +
> >  			goto retry;
> >  		}
> >  	}

Thanks
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [RFC 2/3] mm: throttle on IO only when there are too many dirty and writeback pages
  2015-11-18 13:03 ` [RFC 2/3] mm: throttle on IO only when there are too many dirty and writeback pages Michal Hocko
@ 2015-11-19 23:12   ` David Rientjes
  2015-11-20  9:15     ` Michal Hocko
  0 siblings, 1 reply; 24+ messages in thread
From: David Rientjes @ 2015-11-19 23:12 UTC (permalink / raw)
  To: Michal Hocko
  Cc: linux-mm, Andrew Morton, Linus Torvalds, Mel Gorman,
	Johannes Weiner, Tetsuo Handa, Hillf Danton, KAMEZAWA Hiroyuki,
	Michal Hocko

On Wed, 18 Nov 2015, Michal Hocko wrote:

> From: Michal Hocko <mhocko@suse.com>
> 
> wait_iff_congested has been used to throttle the allocator before it
> retried another round of direct reclaim to allow the writeback to make
> some progress and to prevent reclaim from looping over dirty/writeback
> pages without making any progress. We used to do congestion_wait before
> 0e093d99763e ("writeback: do not sleep on the congestion queue if
> there are no congested BDIs or if significant congestion is not being
> encountered in the current zone") but that led to undesirable stalls
> and sleeping for the full timeout even when the BDI wasn't congested.
> Hence wait_iff_congested was used instead. But it seems that even
> wait_iff_congested doesn't work as expected. We might have a small file
> LRU list with all pages dirty/writeback and yet the bdi is not congested,
> so this ends up being just a cond_resched and can trigger a premature OOM.
>
> This patch replaces the unconditional wait_iff_congested with
> congestion_wait which is executed only if we _know_ that the last round
> of direct reclaim didn't make any progress and dirty+writeback pages are
> more than half of the reclaimable pages in the zone which might be
> usable for our target allocation. This shouldn't reintroduce the stalls
> fixed by 0e093d99763e because congestion_wait is called only when we
> are getting hopeless and sleeping is a better choice than going OOM with
> many pages under IO.
> 

Why HZ/10 instead of HZ/50?

> Signed-off-by: Michal Hocko <mhocko@suse.com>
> ---
>  mm/page_alloc.c | 16 ++++++++++++++--
>  1 file changed, 14 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 020c005c5bc0..e6271bc19e6a 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -3212,8 +3212,20 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  		 */
>  		if (__zone_watermark_ok(zone, order, min_wmark_pages(zone),
>  				ac->high_zoneidx, alloc_flags, target)) {
> -			/* Wait for some write requests to complete then retry */
> -			wait_iff_congested(zone, BLK_RW_ASYNC, HZ/50);
> +			unsigned long writeback = zone_page_state(zone, NR_WRITEBACK),
> +				      dirty = zone_page_state(zone, NR_FILE_DIRTY);
> +
> +			/*
> +			 * If we didn't make any progress and have a lot of
> +			 * dirty + writeback pages then we should wait for
> +			 * an IO to complete to slow down the reclaim and
> +			 * prevent a premature OOM
> +			 */
> +			if (!did_some_progress && 2*(writeback + dirty) > reclaimable)
> +				congestion_wait(BLK_RW_ASYNC, HZ/10);

The purpose of the heuristic seems logical, but I'm concerned about the 
threshold for determining when to wait and when to just resched and retry 
again.

This triggers for environments without swap when

2 * (NR_WRITEBACK + NR_DIRTY) > (NR_ACTIVE_FILE + NR_INACTIVE_FILE +
				 NR_ISOLATED_FILE + NR_ISOLATED_ANON)

 [ The use of NR_ISOLATED_ANON in swapless is asked about in patch 1. ]

How exactly was this chosen?  Why not when the two sides equal each other?

> +			else
> +				cond_resched();
> +
>  			goto retry;
>  		}
>  	}

^ permalink raw reply	[flat|nested] 24+ messages in thread

* [RFC 2/3] mm: throttle on IO only when there are too many dirty and writeback pages
  2015-11-18 13:03 [RFC 0/3] OOM detection rework v2 Michal Hocko
@ 2015-11-18 13:03 ` Michal Hocko
  2015-11-19 23:12   ` David Rientjes
  0 siblings, 1 reply; 24+ messages in thread
From: Michal Hocko @ 2015-11-18 13:03 UTC (permalink / raw)
  To: linux-mm
  Cc: Andrew Morton, Linus Torvalds, Mel Gorman, Johannes Weiner,
	David Rientjes, Tetsuo Handa, Hillf Danton, KAMEZAWA Hiroyuki,
	Michal Hocko

From: Michal Hocko <mhocko@suse.com>

wait_iff_congested has been used to throttle the allocator before it
retried another round of direct reclaim to allow the writeback to make
some progress and to prevent reclaim from looping over dirty/writeback
pages without making any progress. We used to do congestion_wait before
0e093d99763e ("writeback: do not sleep on the congestion queue if
there are no congested BDIs or if significant congestion is not being
encountered in the current zone") but that led to undesirable stalls
and sleeping for the full timeout even when the BDI wasn't congested.
Hence wait_iff_congested was used instead. But it seems that even
wait_iff_congested doesn't work as expected. We might have a small file
LRU list with all pages dirty/writeback and yet the bdi is not congested,
so this ends up being just a cond_resched and can trigger a premature OOM.

This patch replaces the unconditional wait_iff_congested with
congestion_wait which is executed only if we _know_ that the last round
of direct reclaim didn't make any progress and dirty+writeback pages are
more than half of the reclaimable pages in the zone which might be
usable for our target allocation. This shouldn't reintroduce the stalls
fixed by 0e093d99763e because congestion_wait is called only when we
are getting hopeless and sleeping is a better choice than going OOM with
many pages under IO.

Signed-off-by: Michal Hocko <mhocko@suse.com>
---
 mm/page_alloc.c | 16 ++++++++++++++--
 1 file changed, 14 insertions(+), 2 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 020c005c5bc0..e6271bc19e6a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3212,8 +3212,20 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 		 */
 		if (__zone_watermark_ok(zone, order, min_wmark_pages(zone),
 				ac->high_zoneidx, alloc_flags, target)) {
-			/* Wait for some write requests to complete then retry */
-			wait_iff_congested(zone, BLK_RW_ASYNC, HZ/50);
+			unsigned long writeback = zone_page_state(zone, NR_WRITEBACK),
+				      dirty = zone_page_state(zone, NR_FILE_DIRTY);
+
+			/*
+			 * If we didn't make any progress and have a lot of
+			 * dirty + writeback pages then we should wait for
+			 * an IO to complete to slow down the reclaim and
+			 * prevent a premature OOM
+			 */
+			if (!did_some_progress && 2*(writeback + dirty) > reclaimable)
+				congestion_wait(BLK_RW_ASYNC, HZ/10);
+			else
+				cond_resched();
+
 			goto retry;
 		}
 	}
-- 
2.6.2

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* Re: [RFC 2/3] mm: throttle on IO only when there are too many dirty and writeback pages
  2015-10-30  5:48     ` Kamezawa Hiroyuki
@ 2015-10-30  8:38       ` Michal Hocko
  -1 siblings, 0 replies; 24+ messages in thread
From: Michal Hocko @ 2015-10-30  8:38 UTC (permalink / raw)
  To: Kamezawa Hiroyuki
  Cc: linux-mm, Andrew Morton, Linus Torvalds, Mel Gorman,
	Johannes Weiner, Rik van Riel, David Rientjes, Tetsuo Handa,
	LKML

On Fri 30-10-15 14:48:40, KAMEZAWA Hiroyuki wrote:
[...]
> > @@ -3191,8 +3191,23 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> >   		 */
> >   		if (__zone_watermark_ok(zone, order, min_wmark_pages(zone),
> >   				ac->high_zoneidx, alloc_flags, target)) {
> > -			/* Wait for some write requests to complete then retry */
> > -			wait_iff_congested(zone, BLK_RW_ASYNC, HZ/50);
> > +			unsigned long writeback = zone_page_state(zone, NR_WRITEBACK),
> > +				      dirty = zone_page_state(zone, NR_FILE_DIRTY);
> > +
> > +			if (did_some_progress)
> > +				goto retry;
> > +
> > +			/*
> > +			 * If we didn't make any progress and have a lot of
> > +			 * dirty + writeback pages then we should wait for
> > +			 * an IO to complete to slow down the reclaim and
> > +			 * prevent a premature OOM
> > +			 */
> > +			if (2*(writeback + dirty) > reclaimable)
> 
> Doesn't this add unnecessary latency if other zones have enough clean memory?

We know we haven't made any progress in the last reclaim round, so any
zone with clean memory is rather unlikely.

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [RFC 2/3] mm: throttle on IO only when there are too many dirty and writeback pages
  2015-10-30  4:18     ` Hillf Danton
@ 2015-10-30  8:37       ` Michal Hocko
  -1 siblings, 0 replies; 24+ messages in thread
From: Michal Hocko @ 2015-10-30  8:37 UTC (permalink / raw)
  To: Hillf Danton
  Cc: linux-mm, 'Andrew Morton', 'Linus Torvalds',
	'Mel Gorman', 'Johannes Weiner',
	'Rik van Riel', 'David Rientjes',
	'Tetsuo Handa', 'LKML',
	Christoph Lameter

On Fri 30-10-15 12:18:50, Hillf Danton wrote:
> 
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -3191,8 +3191,23 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> >  		 */
> >  		if (__zone_watermark_ok(zone, order, min_wmark_pages(zone),
> >  				ac->high_zoneidx, alloc_flags, target)) {
> > -			/* Wait for some write requests to complete then retry */
> > -			wait_iff_congested(zone, BLK_RW_ASYNC, HZ/50);
> > +			unsigned long writeback = zone_page_state(zone, NR_WRITEBACK),
> > +				      dirty = zone_page_state(zone, NR_FILE_DIRTY);
> > +
> > +			if (did_some_progress)
> > +				goto retry;
> > +
> > +			/*
> > +			 * If we didn't make any progress and have a lot of
> > +			 * dirty + writeback pages then we should wait for
> > +			 * an IO to complete to slow down the reclaim and
> > +			 * prevent a premature OOM
> > +			 */
> > +			if (2*(writeback + dirty) > reclaimable)
> > +				congestion_wait(BLK_RW_ASYNC, HZ/10);
> > +			else
> > +				cond_resched();
> > +
> 
> Looks like the vmstat updater issue is not addressed.

This is not the aim of this patch set.
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [RFC 2/3] mm: throttle on IO only when there are too many dirty and writeback pages
  2015-10-29 15:17   ` mhocko
@ 2015-10-30  5:48     ` Kamezawa Hiroyuki
  -1 siblings, 0 replies; 24+ messages in thread
From: Kamezawa Hiroyuki @ 2015-10-30  5:48 UTC (permalink / raw)
  To: mhocko, linux-mm
  Cc: Andrew Morton, Linus Torvalds, Mel Gorman, Johannes Weiner,
	Rik van Riel, David Rientjes, Tetsuo Handa, LKML, Michal Hocko

On 2015/10/30 0:17, mhocko@kernel.org wrote:
> From: Michal Hocko <mhocko@suse.com>
> 
> wait_iff_congested has been used to throttle the allocator before it
> retried another round of direct reclaim to allow the writeback to make
> some progress and to prevent reclaim from looping over dirty/writeback
> pages without making any progress. We used to do congestion_wait before
> 0e093d99763e ("writeback: do not sleep on the congestion queue if
> there are no congested BDIs or if significant congestion is not being
> encountered in the current zone") but that led to undesirable stalls
> and sleeping for the full timeout even when the BDI wasn't congested.
> Hence wait_iff_congested was used instead. But it seems that even
> wait_iff_congested doesn't work as expected. We might have a small file
> LRU list with all pages dirty/writeback and yet the bdi is not congested,
> so this ends up being just a cond_resched and can trigger a premature OOM.
>
> This patch replaces the unconditional wait_iff_congested with
> congestion_wait which is executed only if we _know_ that the last round
> of direct reclaim didn't make any progress and dirty+writeback pages are
> more than half of the reclaimable pages in the zone which might be
> usable for our target allocation. This shouldn't reintroduce the stalls
> fixed by 0e093d99763e because congestion_wait is called only when we
> are getting hopeless and sleeping is a better choice than going OOM with
> many pages under IO.
> 
> Signed-off-by: Michal Hocko <mhocko@suse.com>
> ---
>   mm/page_alloc.c | 19 +++++++++++++++++--
>   1 file changed, 17 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 9c0abb75ad53..0518ca6a9776 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -3191,8 +3191,23 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>   		 */
>   		if (__zone_watermark_ok(zone, order, min_wmark_pages(zone),
>   				ac->high_zoneidx, alloc_flags, target)) {
> -			/* Wait for some write requests to complete then retry */
> -			wait_iff_congested(zone, BLK_RW_ASYNC, HZ/50);
> +			unsigned long writeback = zone_page_state(zone, NR_WRITEBACK),
> +				      dirty = zone_page_state(zone, NR_FILE_DIRTY);
> +
> +			if (did_some_progress)
> +				goto retry;
> +
> +			/*
> +			 * If we didn't make any progress and have a lot of
> +			 * dirty + writeback pages then we should wait for
> +			 * an IO to complete to slow down the reclaim and
> +			 * prevent a premature OOM
> +			 */
> +			if (2*(writeback + dirty) > reclaimable)

Doesn't this add unnecessary latency if other zones have enough clean memory?


Thanks,
-Kame
 



^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [RFC 2/3] mm: throttle on IO only when there are too many dirty and writeback pages
  2015-10-29 15:17   ` mhocko
@ 2015-10-30  4:18     ` Hillf Danton
  -1 siblings, 0 replies; 24+ messages in thread
From: Hillf Danton @ 2015-10-30  4:18 UTC (permalink / raw)
  To: mhocko, linux-mm
  Cc: 'Andrew Morton', 'Linus Torvalds',
	'Mel Gorman', 'Johannes Weiner',
	'Rik van Riel', 'David Rientjes',
	'Tetsuo Handa', 'LKML', 'Michal Hocko',
	Christoph Lameter


> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -3191,8 +3191,23 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  		 */
>  		if (__zone_watermark_ok(zone, order, min_wmark_pages(zone),
>  				ac->high_zoneidx, alloc_flags, target)) {
> -			/* Wait for some write requests to complete then retry */
> -			wait_iff_congested(zone, BLK_RW_ASYNC, HZ/50);
> +			unsigned long writeback = zone_page_state(zone, NR_WRITEBACK),
> +				      dirty = zone_page_state(zone, NR_FILE_DIRTY);
> +
> +			if (did_some_progress)
> +				goto retry;
> +
> +			/*
> +			 * If we didn't make any progress and have a lot of
> +			 * dirty + writeback pages then we should wait for
> +			 * an IO to complete to slow down the reclaim and
> +			 * prevent a premature OOM
> +			 */
> +			if (2*(writeback + dirty) > reclaimable)
> +				congestion_wait(BLK_RW_ASYNC, HZ/10);
> +			else
> +				cond_resched();
> +

Looks like the vmstat updater issue is not addressed.
>  			goto retry;
>  		}
>  	}
> --
> 2.6.1


^ permalink raw reply	[flat|nested] 24+ messages in thread

* [RFC 2/3] mm: throttle on IO only when there are too many dirty and writeback pages
  2015-10-29 15:17 RFC: OOM detection rework v1 mhocko
@ 2015-10-29 15:17   ` mhocko
  0 siblings, 0 replies; 24+ messages in thread
From: mhocko @ 2015-10-29 15:17 UTC (permalink / raw)
  To: linux-mm
  Cc: Andrew Morton, Linus Torvalds, Mel Gorman, Johannes Weiner,
	Rik van Riel, David Rientjes, Tetsuo Handa, LKML, Michal Hocko

From: Michal Hocko <mhocko@suse.com>

wait_iff_congested has been used to throttle the allocator before it
retried another round of direct reclaim to allow the writeback to make
some progress and to prevent reclaim from looping over dirty/writeback
pages without making any progress. We used to do congestion_wait before
0e093d99763e ("writeback: do not sleep on the congestion queue if
there are no congested BDIs or if significant congestion is not being
encountered in the current zone") but that led to undesirable stalls
and sleeping for the full timeout even when the BDI wasn't congested.
Hence wait_iff_congested was used instead. But it seems that even
wait_iff_congested doesn't work as expected. We might have a small file
LRU list with all pages dirty/writeback and yet the bdi is not congested,
so this ends up being just a cond_resched and can trigger a premature OOM.

This patch replaces the unconditional wait_iff_congested with
congestion_wait which is executed only if we _know_ that the last round
of direct reclaim didn't make any progress and dirty+writeback pages are
more than half of the reclaimable pages in the zone which might be
usable for our target allocation. This shouldn't reintroduce the stalls
fixed by 0e093d99763e because congestion_wait is called only when we
are getting hopeless and sleeping is a better choice than going OOM with
many pages under IO.

Signed-off-by: Michal Hocko <mhocko@suse.com>
---
 mm/page_alloc.c | 19 +++++++++++++++++--
 1 file changed, 17 insertions(+), 2 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 9c0abb75ad53..0518ca6a9776 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3191,8 +3191,23 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 		 */
 		if (__zone_watermark_ok(zone, order, min_wmark_pages(zone),
 				ac->high_zoneidx, alloc_flags, target)) {
-			/* Wait for some write requests to complete then retry */
-			wait_iff_congested(zone, BLK_RW_ASYNC, HZ/50);
+			unsigned long writeback = zone_page_state(zone, NR_WRITEBACK),
+				      dirty = zone_page_state(zone, NR_FILE_DIRTY);
+
+			if (did_some_progress)
+				goto retry;
+
+			/*
+			 * If we didn't make any progress and have a lot of
+			 * dirty + writeback pages then we should wait for
+			 * an IO to complete to slow down the reclaim and
+			 * prevent a premature OOM
+			 */
+			if (2*(writeback + dirty) > reclaimable)
+				congestion_wait(BLK_RW_ASYNC, HZ/10);
+			else
+				cond_resched();
+
 			goto retry;
 		}
 	}
-- 
2.6.1


^ permalink raw reply related	[flat|nested] 24+ messages in thread

end of thread, other threads:[~2015-12-14 18:35 UTC | newest]

Thread overview: 24+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-12-01 12:56 [RFC 0/3] OOM detection rework v3 Michal Hocko
2015-12-01 12:56 ` [RFC 1/3] mm, oom: refactor oom detection Michal Hocko
2015-12-11 16:16   ` Johannes Weiner
2015-12-14 18:34     ` Michal Hocko
2015-12-01 12:56 ` [RFC 2/3] mm: throttle on IO only when there are too many dirty and writeback pages Michal Hocko
2015-12-02  7:09   ` Hillf Danton
2015-12-11 16:25   ` Johannes Weiner
2015-12-01 12:56 ` [RFC 3/3] mm: use watermak checks for __GFP_REPEAT high order allocations Michal Hocko
2015-12-02  7:07   ` Hillf Danton
2015-12-02  8:52     ` Michal Hocko
2015-12-11  8:42 ` [RFC 0/3] OOM detection rework v3 Michal Hocko
  -- strict thread matches above, loose matches on Subject: below --
2015-11-18 13:03 [RFC 0/3] OOM detection rework v2 Michal Hocko
2015-11-18 13:03 ` [RFC 2/3] mm: throttle on IO only when there are too many dirty and writeback pages Michal Hocko
2015-11-19 23:12   ` David Rientjes
2015-11-20  9:15     ` Michal Hocko
2015-10-29 15:17 RFC: OOM detection rework v1 mhocko
2015-10-29 15:17 ` [RFC 2/3] mm: throttle on IO only when there are too many dirty and writeback pages mhocko
2015-10-29 15:17   ` mhocko
2015-10-30  4:18   ` Hillf Danton
2015-10-30  4:18     ` Hillf Danton
2015-10-30  8:37     ` Michal Hocko
2015-10-30  8:37       ` Michal Hocko
2015-10-30  5:48   ` Kamezawa Hiroyuki
2015-10-30  5:48     ` Kamezawa Hiroyuki
2015-10-30  8:38     ` Michal Hocko
2015-10-30  8:38       ` Michal Hocko

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.