From: mhocko@kernel.org
To: 
Cc: Andrew Morton, Linus Torvalds, Mel Gorman, Johannes Weiner,
	Rik van Riel, David Rientjes, Tetsuo Handa, LKML, Michal Hocko
Subject: [RFC 1/3] mm, oom: refactor oom detection
Date: Thu, 29 Oct 2015 16:17:13 +0100
Message-Id: <1446131835-3263-2-git-send-email-mhocko@kernel.org>
X-Mailer: git-send-email 2.6.1
In-Reply-To: <1446131835-3263-1-git-send-email-mhocko@kernel.org>
References: <1446131835-3263-1-git-send-email-mhocko@kernel.org>

From: Michal Hocko

__alloc_pages_slowpath has traditionally relied on direct reclaim and
did_some_progress as an indicator that it makes sense to retry the
allocation rather than declaring OOM. shrink_zones had to rely on
zone_reclaimable if shrink_zone didn't make any progress, to prevent a
premature OOM killer invocation - the LRU might be full of dirty or
writeback pages and direct reclaim cannot clean those up.

zone_reclaimable allows rescanning the reclaimable lists several times
and restarting if a page is freed. This is really subtle behavior and it
might lead to a livelock when a single freed page keeps the allocator
looping but the current task will not be able to allocate that single
page. The OOM killer would be more appropriate than looping without any
progress for an unbounded amount of time.

This patch changes the OOM detection logic and pulls it out of
shrink_zone, which is too low a level to be appropriate for any high
level decisions such as OOM, which is a per-zonelist property. It is
__alloc_pages_slowpath which knows how many attempts have been made and
what progress there was so far, therefore it is more appropriate to
implement this logic there.

The new heuristic tries to be more deterministic and easier to follow.
Retrying makes sense only if the currently reclaimable memory + free
pages would allow the current allocation request to succeed (as per
__zone_watermark_ok) for at least one zone in the usable zonelist.

This alone wouldn't be sufficient, though, because writeback might get
stuck and reclaimable pages might be pinned for a really long time, or
might even depend on the current allocation context. Therefore there is a
feedback mechanism implemented which reduces the reclaim target after
each reclaim round without any progress. This means that we should
eventually converge to only NR_FREE_PAGES as the target, fail the
watermark check and proceed to OOM. The backoff is simple and linear with
1/16 of the reclaimable pages for each round without any progress. We are
optimistic and reset the counter after each successful reclaim round.

Costly high order allocations mostly preserve their semantics: those
without __GFP_REPEAT fail right away while those which have the flag set
will back off once the amount of reclaimed pages reaches the equivalent
of the requested order. The only difference is that if there was no
progress during the reclaim we rely on the zone watermark check. This is
a more logical thing to do than the previous 1<<order check.
---
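To make the backoff easier to review, here is a minimal standalone sketch
(plain userspace C, not kernel code) of the arithmetic the page_alloc.c hunk
below adds. The free/reclaimable/watermark numbers are made up for
illustration and the watermark test is reduced to a plain comparison; only
the target computation mirrors the patch.

#include <stdio.h>

#define MAX_STALL_BACKOFF 16

int main(void)
{
	const long free_pages  = 512;	/* stand-in for NR_FREE_PAGES */
	const long reclaimable = 32768;	/* stand-in for LRU + isolated pages */
	const long watermark   = 4096;	/* stand-in for min_wmark + request */
	int stall_backoff;

	for (stall_backoff = 0; stall_backoff <= MAX_STALL_BACKOFF; stall_backoff++) {
		long target = reclaimable;

		/* same formula as the patch; clamped because we use signed types here */
		target -= stall_backoff * (1 + target / MAX_STALL_BACKOFF);
		if (target < 0)
			target = 0;
		target += free_pages;

		printf("no-progress round %2d: target=%6ld -> %s\n",
		       stall_backoff, target,
		       target >= watermark ? "retry reclaim" : "give up (OOM)");
	}
	return 0;
}

With these example numbers the check keeps succeeding for the first 15
stalled rounds (0-14) and fails from round 15 on, i.e. the retry loop is
bounded by MAX_STALL_BACKOFF rounds without progress rather than by
zone_reclaimable() racing with a single freed page.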
 include/linux/swap.h |  1 +
 mm/page_alloc.c      | 69 ++++++++++++++++++++++++++++++++++++++++++++++------
 mm/vmscan.c          | 10 +-------
 3 files changed, 64 insertions(+), 16 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 9c7c4b418498..8298e1dc20f9 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -317,6 +317,7 @@ extern void lru_cache_add_active_or_unevictable(struct page *page,
 						struct vm_area_struct *vma);
 
 /* linux/mm/vmscan.c */
+extern unsigned long zone_reclaimable_pages(struct zone *zone);
 extern unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
 					gfp_t gfp_mask, nodemask_t *mask);
 extern int __isolate_lru_page(struct page *page, isolate_mode_t mode);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c73913648357..9c0abb75ad53 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2972,6 +2972,13 @@ static inline bool is_thp_gfp_mask(gfp_t gfp_mask)
 	return (gfp_mask & (GFP_TRANSHUGE | __GFP_KSWAPD_RECLAIM)) == GFP_TRANSHUGE;
 }
 
+/*
+ * Number of backoff steps for potentially reclaimable pages if the direct reclaim
+ * cannot make any progress. Each step will reduce 1/MAX_STALL_BACKOFF of the
+ * reclaimable memory.
+ */
+#define MAX_STALL_BACKOFF 16
+
 static inline struct page *
 __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 						struct alloc_context *ac)
@@ -2984,6 +2991,9 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	enum migrate_mode migration_mode = MIGRATE_ASYNC;
 	bool deferred_compaction = false;
 	int contended_compaction = COMPACT_CONTENDED_NONE;
+	struct zone *zone;
+	struct zoneref *z;
+	int stall_backoff = 0;
 
 	/*
 	 * In the slowpath, we sanity check order to avoid ever trying to
@@ -3135,13 +3145,56 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 	if (gfp_mask & __GFP_NORETRY)
 		goto noretry;
 
-	/* Keep reclaiming pages as long as there is reasonable progress */
+	/*
+	 * Do not retry high order allocations unless they are __GFP_REPEAT
+	 * and even then do not retry endlessly.
+	 */
 	pages_reclaimed += did_some_progress;
-	if ((did_some_progress && order <= PAGE_ALLOC_COSTLY_ORDER) ||
-	    ((gfp_mask & __GFP_REPEAT) && pages_reclaimed < (1 << order))) {
-		/* Wait for some write requests to complete then retry */
-		wait_iff_congested(ac->preferred_zone, BLK_RW_ASYNC, HZ/50);
-		goto retry;
+	if (order > PAGE_ALLOC_COSTLY_ORDER) {
+		if (!(gfp_mask & __GFP_REPEAT) || pages_reclaimed >= (1<<order))
+			goto noretry;
+
+		if (did_some_progress)
+			goto retry;
+	}
+
+	/*
+	 * Be optimistic and consider all the reclaimable memory as usable for
+	 * this allocation request. Back off if the reclaim is not making any
+	 * progress, though.
+	 */
+	if (!did_some_progress)
+		stall_backoff = min(stall_backoff+1, MAX_STALL_BACKOFF);
+	else
+		stall_backoff = 0;
+
+	/*
+	 * Keep reclaiming pages while there is a chance this will lead
+	 * somewhere. If none of the target zones can satisfy our allocation
+	 * request even when all the reclaimable pages are considered then we
+	 * have to go OOM.
+	 */
+	for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, ac->high_zoneidx, ac->nodemask) {
+		unsigned long free = zone_page_state(zone, NR_FREE_PAGES);
+		unsigned long reclaimable;
+		unsigned long target;
+
+		reclaimable = zone_reclaimable_pages(zone) +
+			      zone_page_state(zone, NR_ISOLATED_FILE) +
+			      zone_page_state(zone, NR_ISOLATED_ANON);
+		target = reclaimable;
+		target -= stall_backoff * (1 + target/MAX_STALL_BACKOFF);
+		target += free;
+
+		/*
+		 * Would the allocation succeed if we reclaimed the whole target?
+		 */
+		if (__zone_watermark_ok(zone, order, min_wmark_pages(zone),
+				ac->high_zoneidx, alloc_flags, target)) {
+			/* Wait for some write requests to complete then retry */
+			wait_iff_congested(zone, BLK_RW_ASYNC, HZ/50);
+			goto retry;
+		}
 	}
 
 	/* Reclaim has failed us, start killing things */
@@ -3150,8 +3203,10 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 		goto got_pg;
 
 	/* Retry as long as the OOM killer is making progress */
-	if (did_some_progress)
+	if (did_some_progress) {
+		stall_backoff = 0;
 		goto retry;
+	}
 
 noretry:
 	/*
diff --git a/mm/vmscan.c b/mm/vmscan.c
index c88d74ad9304..bc14217acd47 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -193,7 +193,7 @@ static bool sane_reclaim(struct scan_control *sc)
 }
 #endif
 
-static unsigned long zone_reclaimable_pages(struct zone *zone)
+unsigned long zone_reclaimable_pages(struct zone *zone)
 {
 	unsigned long nr;
 
@@ -2639,10 +2639,6 @@ static bool shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
 
 		if (shrink_zone(zone, sc, zone_idx(zone) == classzone_idx))
 			reclaimable = true;
-
-		if (global_reclaim(sc) &&
-		    !reclaimable && zone_reclaimable(zone))
-			reclaimable = true;
 	}
 
 	/*
@@ -2734,10 +2730,6 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
 			goto retry;
 	}
 
-	/* Any of the zones still reclaimable? Don't OOM. */
-	if (zones_reclaimable)
-		return 1;
-
 	return 0;
 }
 
-- 
2.6.1