From: "Hillf Danton" <hillf.zj@alibaba-inc.com>
To: "'Michal Hocko'" <mhocko@kernel.org>,
	"'Andrew Morton'" <akpm@linux-foundation.org>
Cc: "'Linus Torvalds'" <torvalds@linux-foundation.org>,
	"'Johannes Weiner'" <hannes@cmpxchg.org>,
	"'Mel Gorman'" <mgorman@suse.de>,
	"'David Rientjes'" <rientjes@google.com>,
	"'Tetsuo Handa'" <penguin-kernel@I-love.SAKURA.ne.jp>,
	"'Joonsoo Kim'" <js1304@gmail.com>,
	"'Vlastimil Babka'" <vbabka@suse.cz>, <linux-mm@kvack.org>,
	"'LKML'" <linux-kernel@vger.kernel.org>,
	"'Michal Hocko'" <mhocko@suse.com>
Subject: Re: [PATCH 13/14] mm: consider compaction feedback also for costly allocation
Date: Thu, 21 Apr 2016 16:13:56 +0800	[thread overview]
Message-ID: <02f201d19ba5$c0c7e550$4257aff0$@alibaba-inc.com> (raw)
In-Reply-To: <1461181647-8039-14-git-send-email-mhocko@kernel.org>

> 
> From: Michal Hocko <mhocko@suse.com>
> 
> The PAGE_ALLOC_COSTLY_ORDER retry logic is currently handled mostly
> inside should_reclaim_retry, where we decide not to retry once at
> least order worth of pages has been reclaimed, or, when the reclaim
> hasn't made any progress, to retry only if the watermark check for at
> least one zone would succeed after reclaiming all reclaimable pages.
> Compaction feedback is mostly ignored; we just try to make sure that
> compaction did at least something before giving up.
> 
> The first condition was added by a41f24ea9fd6 ("page allocator: smarter
> retry of costly-order allocations") and it assumed that lumpy reclaim
> could have created a page of the sufficient order. Lumpy reclaim has
> been removed quite some time ago, so the assumption doesn't hold
> anymore. Remove the check for the number of reclaimed pages and rely
> solely on the compaction feedback. should_reclaim_retry now only makes
> sure that we keep retrying reclaim for high-order pages if they are
> hidden by watermarks, so that order-0 reclaim really makes sense.
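
In other words, the only remaining reason to keep retrying reclaim for a
high-order request is that freeing every reclaimable page could still
clear the watermark for that order. Below is a rough user-space model of
that condition, just to make the idea concrete; the real check is
__zone_watermark_ok with reserves, per-zone iteration and a
no_progress_loops backoff, none of which are modelled here.

#include <stdbool.h>

struct zone_model {
	unsigned long free_pages;
	unsigned long reclaimable_pages;
	unsigned long min_watermark;
};

/*
 * Simplified model: retry reclaim only while reclaiming everything
 * reclaimable could still put the request over the watermark for its
 * order.  If even this best case fails, further order-0 reclaim cannot
 * help and we should move on (to compaction feedback or the OOM path).
 */
static bool model_should_reclaim_retry(const struct zone_model *z,
				       unsigned int order)
{
	unsigned long available = z->free_pages + z->reclaimable_pages;

	return available > z->min_watermark + (1UL << order);
}
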
> 
> should_compact_retry now keeps retrying even for costly allocations.
> The number of retries is reduced compared to !costly requests because
> costly requests are less important and harder to grant, so their
> pressure shouldn't cause contention for other requests or lead to
> over-reclaim. We also do not reset no_progress_loops for costly
> requests to make sure we do not keep reclaiming too aggressively.
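
For illustration of the resulting retry budget, here is a standalone
sketch of the new decision; MAX_COMPACT_RETRIES = 16 is an assumption
taken from the earlier patches in this series, which would give costly
orders at most 4 retries once compaction is no longer being withdrawn.

#include <stdbool.h>

#define SKETCH_MAX_COMPACT_RETRIES	16	/* assumed value of MAX_COMPACT_RETRIES */
#define SKETCH_COSTLY_ORDER		3	/* PAGE_ALLOC_COSTLY_ORDER */

static bool sketch_should_compact_retry(unsigned int order,
					int compaction_retries,
					bool withdrawn)
{
	int max_retries = SKETCH_MAX_COMPACT_RETRIES;

	if (!order)
		return false;

	/* deferred or lock-contended compaction always earns another try */
	if (withdrawn)
		return true;

	/* costly requests only get a quarter of the retry budget */
	if (order > SKETCH_COSTLY_ORDER)
		max_retries /= 4;

	return compaction_retries <= max_retries;
}
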
> 
> This has been tested by running a process which fragments memory:
> 	- compact memory
> 	- mmap a large portion of memory (1920M on a 2G RAM machine with
> 	  2G of swap space)
> 	- MADV_DONTNEED a single page in PAGE_SIZE*((1UL<<MAX_ORDER)-1)
> 	  steps until a certain amount of memory is freed (250M in my
> 	  test), reducing the step to (step / 2) + 1 after reaching the
> 	  end of the mapping
> 	- then run a script which populates the page cache with 2G
> 	  (MemTotal) from /dev/zero into a new file
> and then tries to allocate
> nr_hugepages=$(awk '/MemAvailable/{printf "%d\n", $2/(2*1024)}' /proc/meminfo)
> huge pages.
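
The fragmenter itself is not part of the patch; a minimal user-space
sketch of the procedure described above (the 1920M/250M sizes and
MAX_ORDER = 11 are assumptions taken from the description) could look
like this:

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);
	size_t size = 1920UL << 20;		/* 1920M mapping */
	size_t target = 250UL << 20;		/* stop after freeing 250M */
	size_t step = (size_t)page * ((1UL << 11) - 1);	/* PAGE_SIZE*((1UL<<MAX_ORDER)-1) */
	size_t freed = 0;
	char *map;

	map = mmap(NULL, size, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (map == MAP_FAILED)
		return 1;
	memset(map, 1, size);			/* fault the whole mapping in */

	while (freed < target) {
		size_t off;

		/* punch single-page holes at large strides */
		for (off = 0; off + page <= size && freed < target; off += step)
			if (madvise(map + off, page, MADV_DONTNEED) == 0)
				freed += page;
		/* reduce the step after reaching the end of the mapping */
		step = step / 2 + 1;
	}
	printf("Done fragmenting. size=%zu freed=%zu\n", size, freed);
	pause();				/* keep the mapping alive */
	return 0;
}
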
> 
> root@test1:~# echo 1 > /proc/sys/vm/overcommit_memory;echo 1 > /proc/sys/vm/compact_memory; ./fragment-mem-and-run
> /root/alloc_hugepages.sh 1920M 250M
> Node 0, zone      DMA     31     28     31     10      2      0      2      1      2      3      1
> Node 0, zone    DMA32    437    319    171     50     28     25     20     16     16     14    437
> 
> * This is the /proc/buddyinfo after the compaction
> 
> Done fragmenting. size=2013265920 freed=262144000
> Node 0, zone      DMA    165     48      3      1      2      0      2      2      2      2      0
> Node 0, zone    DMA32  35109  14575    185     51     41     12      6      0      0      0      0
> 
> * /proc/buddyinfo after memory got fragmented
> 
> Executing "/root/alloc_hugepages.sh"
> Eating some pagecache
> 508623+0 records in
> 508623+0 records out
> 2083319808 bytes (2.1 GB) copied, 11.7292 s, 178 MB/s
> Node 0, zone      DMA      3      5      3      1      2      0      2      2      2      2      0
> Node 0, zone    DMA32    111    344    153     20     24     10      3      0      0      0      0
> 
> * /proc/buddyinfo after page cache got eaten
> 
> Trying to allocate 129
> 129
> 
> * 129 hugepages requested and all of them granted.
> 
> Node 0, zone      DMA      3      5      3      1      2      0      2      2      2      2      0
> Node 0, zone    DMA32    127     97     30     99     11      6      2      1      4      0      0
> 
> * /proc/buddyinfo after hugetlb allocation.
> 
> 10 runs behave as follows:
> Trying to allocate 130
> 130
> --
> Trying to allocate 129
> 129
> --
> Trying to allocate 128
> 128
> --
> Trying to allocate 129
> 129
> --
> Trying to allocate 128
> 128
> --
> Trying to allocate 129
> 129
> --
> Trying to allocate 132
> 132
> --
> Trying to allocate 129
> 129
> --
> Trying to allocate 128
> 128
> --
> Trying to allocate 129
> 129
> 
> So basically a 100% success rate for all 10 attempts.
> Without the patch the numbers looked much worse:
> Trying to allocate 128
> 12
> --
> Trying to allocate 129
> 14
> --
> Trying to allocate 129
> 7
> --
> Trying to allocate 129
> 16
> --
> Trying to allocate 129
> 30
> --
> Trying to allocate 129
> 38
> --
> Trying to allocate 129
> 19
> --
> Trying to allocate 129
> 37
> --
> Trying to allocate 129
> 28
> --
> Trying to allocate 129
> 37
> 
> Just for completeness, the base kernel without the oom detection rework
> looks as follows:
> Trying to allocate 127
> 30
> --
> Trying to allocate 129
> 12
> --
> Trying to allocate 129
> 52
> --
> Trying to allocate 128
> 32
> --
> Trying to allocate 129
> 12
> --
> Trying to allocate 129
> 10
> --
> Trying to allocate 129
> 32
> --
> Trying to allocate 128
> 14
> --
> Trying to allocate 128
> 16
> --
> Trying to allocate 129
> 8
> 
> As we can see, the success rate is much lower and more volatile without
> this patch. So the patch not only makes the retry logic for costly
> requests more sensible, it also achieves a higher success rate.
> 
> Acked-by: Vlastimil Babka <vbabka@suse.cz>
> Signed-off-by: Michal Hocko <mhocko@suse.com>
> ---

Acked-by: Hillf Danton <hillf.zj@alibaba-inc.com>

>  mm/page_alloc.c | 63 +++++++++++++++++++++++++++++----------------------------
>  1 file changed, 32 insertions(+), 31 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index bb4df1be0d43..d5a938f12554 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -3019,6 +3019,8 @@ should_compact_retry(unsigned int order, enum compact_result compact_result,
>  		     enum migrate_mode *migrate_mode,
>  		     int compaction_retries)
>  {
> +	int max_retries = MAX_COMPACT_RETRIES;
> +
>  	if (!order)
>  		return false;
> 
> @@ -3036,17 +3038,24 @@ should_compact_retry(unsigned int order, enum compact_result compact_result,
>  	}
> 
>  	/*
> -	 * !costly allocations are really important and we have to make sure
> -	 * the compaction wasn't deferred or didn't bail out early due to locks
> -	 * contention before we go OOM. Still cap the reclaim retry loops with
> -	 * progress to prevent from looping forever and potential trashing.
> +	 * make sure the compaction wasn't deferred or didn't bail out early
> +	 * due to locks contention before we declare that we should give up.
>  	 */
> -	if (order <= PAGE_ALLOC_COSTLY_ORDER) {
> -		if (compaction_withdrawn(compact_result))
> -			return true;
> -		if (compaction_retries <= MAX_COMPACT_RETRIES)
> -			return true;
> -	}
> +	if (compaction_withdrawn(compact_result))
> +		return true;
> +
> +	/*
> +	 * !costly requests are much more important than __GFP_REPEAT
> +	 * costly ones because they are de facto nofail and invoke OOM
> +	 * killer to move on while costly can fail and users are ready
> +	 * to cope with that. 1/4 retries is rather arbitrary but we
> +	 * would need much more detailed feedback from compaction to
> +	 * make a better decision.
> +	 */
> +	if (order > PAGE_ALLOC_COSTLY_ORDER)
> +		max_retries /= 4;
> +	if (compaction_retries <= max_retries)
> +		return true;
> 
>  	return false;
>  }
> @@ -3207,18 +3216,17 @@ static inline bool is_thp_gfp_mask(gfp_t gfp_mask)
>   * Checks whether it makes sense to retry the reclaim to make a forward progress
>   * for the given allocation request.
>   * The reclaim feedback represented by did_some_progress (any progress during
> - * the last reclaim round), pages_reclaimed (cumulative number of reclaimed
> - * pages) and no_progress_loops (number of reclaim rounds without any progress
> - * in a row) is considered as well as the reclaimable pages on the applicable
> - * zone list (with a backoff mechanism which is a function of no_progress_loops).
> + * the last reclaim round) and no_progress_loops (number of reclaim rounds without
> + * any progress in a row) is considered as well as the reclaimable pages on the
> + * applicable zone list (with a backoff mechanism which is a function of
> + * no_progress_loops).
>   *
>   * Returns true if a retry is viable or false to enter the oom path.
>   */
>  static inline bool
>  should_reclaim_retry(gfp_t gfp_mask, unsigned order,
>  		     struct alloc_context *ac, int alloc_flags,
> -		     bool did_some_progress, unsigned long pages_reclaimed,
> -		     int no_progress_loops)
> +		     bool did_some_progress, int no_progress_loops)
>  {
>  	struct zone *zone;
>  	struct zoneref *z;
> @@ -3230,14 +3238,6 @@ should_reclaim_retry(gfp_t gfp_mask, unsigned order,
>  	if (no_progress_loops > MAX_RECLAIM_RETRIES)
>  		return false;
> 
> -	if (order > PAGE_ALLOC_COSTLY_ORDER) {
> -		if (pages_reclaimed >= (1<<order))
> -			return false;
> -
> -		if (did_some_progress)
> -			return true;
> -	}
> -
>  	/*
>  	 * Keep reclaiming pages while there is a chance this will lead somewhere.
>  	 * If none of the target zones can satisfy our allocation request even
> @@ -3308,7 +3308,6 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  	bool can_direct_reclaim = gfp_mask & __GFP_DIRECT_RECLAIM;
>  	struct page *page = NULL;
>  	int alloc_flags;
> -	unsigned long pages_reclaimed = 0;
>  	unsigned long did_some_progress;
>  	enum migrate_mode migration_mode = MIGRATE_ASYNC;
>  	enum compact_result compact_result;
> @@ -3444,16 +3443,18 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  	if (order > PAGE_ALLOC_COSTLY_ORDER && !(gfp_mask & __GFP_REPEAT))
>  		goto noretry;
> 
> -	if (did_some_progress) {
> +	/*
> +	 * Costly allocations might have made a progress but this doesn't mean
> +	 * their order will become available due to high fragmentation so
> +	 * always increment the no progress counter for them
> +	 */
> +	if (did_some_progress && order <= PAGE_ALLOC_COSTLY_ORDER)
>  		no_progress_loops = 0;
> -		pages_reclaimed += did_some_progress;
> -	} else {
> +	else
>  		no_progress_loops++;
> -	}
> 
>  	if (should_reclaim_retry(gfp_mask, order, ac, alloc_flags,
> -				 did_some_progress > 0, pages_reclaimed,
> -				 no_progress_loops))
> +				 did_some_progress > 0, no_progress_loops))
>  		goto retry;
> 
>  	/*
> --
> 2.8.0.rc3
