linux-mm.kvack.org archive mirror
From: Minchan Kim <minchan@kernel.org>
To: Rik van Riel <riel@surriel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	kernel-team@fb.com, Roman Gushchin <guro@fb.com>,
	Qian Cai <cai@lca.pw>, Vlastimil Babka <vbabka@suse.cz>,
	Mel Gorman <mgorman@techsingularity.net>,
	Anshuman Khandual <anshuman.khandual@arm.com>
Subject: Re: [PATCH] mm,page_alloc,cma: conditionally prefer cma pageblocks for movable allocations
Date: Fri, 20 Mar 2020 17:49:16 -0700	[thread overview]
Message-ID: <20200321004608.GA172976@google.com> (raw)
In-Reply-To: <20200306150102.3e77354b@imladris.surriel.com>

On Fri, Mar 06, 2020 at 03:01:02PM -0500, Rik van Riel wrote:
> Posting this one for Roman so I can deal with any upstream feedback and
> create a v2 if needed, while scratching my head over the next piece of
> this puzzle :)
> 
> ---8<---
> 
> From: Roman Gushchin <guro@fb.com>
> 
> Currently a cma area is barely used by the page allocator because
> it's used only as a fallback from movable allocations, while kswapd
> tries hard to make sure that the fallback path isn't used.
> 
> This results in the system evicting memory and pushing data into swap
> while lots of CMA memory is still available. This happens even though
> alloc_contig_range() is perfectly capable of moving any movable
> allocations out of the way of a contiguous allocation.
> 
> To use the cma area more effectively, let's alter the rules: if the
> zone has more free cma pages than half of the total free pages in the
> zone, use cma pageblocks first and fall back to movable blocks in
> case of failure.
> 
> Signed-off-by: Rik van Riel <riel@surriel.com>
> Co-developed-by: Rik van Riel <riel@surriel.com>
> Signed-off-by: Roman Gushchin <guro@fb.com>
> ---
>  mm/page_alloc.c | 12 ++++++++++++
>  1 file changed, 12 insertions(+)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 3c4eb750a199..0fb3c1719625 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2711,6 +2711,18 @@ __rmqueue(struct zone *zone, unsigned int order, int migratetype,
>  {
>  	struct page *page;
>  
> +	/*
> +	 * Balance movable allocations between regular and CMA areas by
> +	 * allocating from CMA when over half of the zone's free memory
> +	 * is in the CMA area.
> +	 */
> +	if (migratetype == MIGRATE_MOVABLE &&
> +	    zone_page_state(zone, NR_FREE_CMA_PAGES) >
> +	    zone_page_state(zone, NR_FREE_PAGES) / 2) {
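
IIUC, the rest of the hunk (trimmed above) tries the CMA free list
first via __rmqueue_cma_fallback() and only falls through to the
regular movable path on failure, roughly like this (my reading of the
changelog, not the exact diff):

	if (migratetype == MIGRATE_MOVABLE &&
	    zone_page_state(zone, NR_FREE_CMA_PAGES) >
	    zone_page_state(zone, NR_FREE_PAGES) / 2) {
		page = __rmqueue_cma_fallback(zone, order);
		if (page)
			return page;
	}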

Can't we move the check to the caller so that we do only one atomic
read per pcp refill instead of one per page?

rmqueue_bulk:
    spin_lock(&zone->lock);
    cma_first = FREE_CMA > FREE_PAGES / 2;
    for (i = 0; i < count; ++i) {
        __rmqueue(zone, order, migratetype, alloc_flags, cma_first);
    }
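
IOW, a rough sketch of what I mean (the extra cma_first argument to
__rmqueue() doesn't exist today, and the freelist/accounting details
are omitted, so treat this as illustrative only):

static int rmqueue_bulk(struct zone *zone, unsigned int order,
			unsigned long count, struct list_head *list,
			int migratetype, unsigned int alloc_flags)
{
	bool cma_first;
	int i, allocated = 0;

	spin_lock(&zone->lock);

	/* Read the counters once per pcp refill instead of once per page. */
	cma_first = migratetype == MIGRATE_MOVABLE &&
		    zone_page_state(zone, NR_FREE_CMA_PAGES) >
		    zone_page_state(zone, NR_FREE_PAGES) / 2;

	for (i = 0; i < count; ++i) {
		struct page *page = __rmqueue(zone, order, migratetype,
					      alloc_flags, cma_first);

		if (!page)
			break;
		list_add_tail(&page->lru, list);
		allocated++;
	}

	spin_unlock(&zone->lock);
	return allocated;
}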

As a long-term solution, I am looking forward to seeing the cma zone
approach, but this is also good as a stop-gap solution.
Actually, on Android, vendors have carried their own customizations to
keep CMA area utilization high (i.e., CMA first and then movable),
though with more restricted allocation paths. So I really want to see
this patch upstream to make CMA utilization higher. A nice thing about
this patch is that it is quite simple.

About the CMA allocation failure ratio, there is no good way to solve
the issue perfectly. Even if we go with the cma zone approach, failures
could still happen. If so, I'd rather expose the symptom more
aggressively so that we hear about the pain and actively look for a
solution rather than relying on luck.

Thus,
Acked-by: Minchan Kim <minchan@kernel.org>


Thread overview: 24+ messages
2020-03-06 20:01 [PATCH] mm,page_alloc,cma: conditionally prefer cma pageblocks for movable allocations Rik van Riel
2020-03-07 22:38 ` Andrew Morton
2020-03-08 13:23   ` Rik van Riel
2020-03-11 17:58     ` Vlastimil Babka
2020-03-11 22:58       ` Roman Gushchin
2020-03-11 23:03         ` Vlastimil Babka
2020-03-11 23:21           ` Roman Gushchin
2020-03-11  8:51 ` Vlastimil Babka
2020-03-11 10:13   ` Joonsoo Kim
2020-03-11 17:41     ` Vlastimil Babka
2020-03-11 17:35   ` Roman Gushchin
2020-03-12  1:41     ` Joonsoo Kim
2020-03-12  2:39       ` Roman Gushchin
2020-03-12  8:56         ` Joonsoo Kim
2020-03-12 17:07           ` Roman Gushchin
2020-03-13  7:44             ` Joonsoo Kim
2020-04-02  2:13       ` Andrew Morton
2020-04-02  2:53         ` Roman Gushchin
2020-04-02  5:43           ` Joonsoo Kim
2020-04-02 19:42             ` Roman Gushchin
2020-04-03  4:34               ` Joonsoo Kim
2020-04-03 17:50                 ` Roman Gushchin
2020-04-02  3:05         ` Roman Gushchin
2020-03-21  0:49 ` Minchan Kim [this message]
