Date: Fri, 20 Mar 2020 17:49:16 -0700
From: Minchan Kim
To: Rik van Riel
Cc: Andrew Morton, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	kernel-team@fb.com, Roman Gushchin, Qian Cai, Vlastimil Babka,
	Mel Gorman, Anshuman Khandual
Subject: Re: [PATCH] mm,page_alloc,cma: conditionally prefer cma pageblocks for movable allocations
Message-ID: <20200321004608.GA172976@google.com>
References: <20200306150102.3e77354b@imladris.surriel.com>
In-Reply-To: <20200306150102.3e77354b@imladris.surriel.com>

On Fri, Mar 06, 2020 at 03:01:02PM -0500, Rik van Riel wrote:
> Posting this one for Roman so I can deal with any upstream feedback and
> create a v2 if needed, while scratching my head over the next piece of
> this puzzle :)
>
> ---8<---
>
> From: Roman Gushchin
>
> Currently a cma area is barely used by the page allocator because
> it's used only as a fallback from movable, however kswapd tries
> hard to make sure that the fallback path isn't used.
>
> This results in a system evicting memory and pushing data into swap,
> while lots of CMA memory is still available. This happens despite the
> fact that alloc_contig_range is perfectly capable of moving any movable
> allocations out of the way of an allocation.
>
> To effectively use the cma area let's alter the rules: if the zone
> has more free cma pages than the half of total free pages in the zone,
> use cma pageblocks first and fallback to movable blocks in the case of
> failure.
>
> Signed-off-by: Rik van Riel
> Co-developed-by: Rik van Riel
> Signed-off-by: Roman Gushchin
> ---
>  mm/page_alloc.c | 12 ++++++++++++
>  1 file changed, 12 insertions(+)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 3c4eb750a199..0fb3c1719625 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2711,6 +2711,18 @@ __rmqueue(struct zone *zone, unsigned int order, int migratetype,
>  {
>  	struct page *page;
>
> +	/*
> +	 * Balance movable allocations between regular and CMA areas by
> +	 * allocating from CMA when over half of the zone's free memory
> +	 * is in the CMA area.
> +	 */
> +	if (migratetype == MIGRATE_MOVABLE &&
> +	    zone_page_state(zone, NR_FREE_CMA_PAGES) >
> +	    zone_page_state(zone, NR_FREE_PAGES) / 2) {

Can't we move the check to the caller, so that we pay for only one pair of
atomic counter reads per pcp refill? Roughly (see the fuller sketch below):

    rmqueue_bulk:
        spin_lock(&zone->lock);
        cma_first = FREE_CMA > FREE_PAGES / 2;
        for (i = 0; i < count; ++i) {
            __rmqueue(zone, order, migratetype, alloc_flags, cma_first);
        }

As a long-term solution, I am looking forward to seeing the CMA zone
approach, but this is also good as a stop-gap solution. Actually, on
Android, vendors have carried their own customizations to keep CMA area
utilization high (i.e., CMA first, then movable), but with more restricted
allocation paths.
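To make the rmqueue_bulk() idea above a bit more concrete, here is a rough,
untested sketch of hoisting the decision out of __rmqueue(). The extra
cma_first parameter on __rmqueue() and the exact placement of the check are
illustrative assumptions, not something taken from the posted patch:

    /*
     * Illustrative sketch only (not compile-tested): compute the CMA-first
     * decision once per pcp refill instead of re-reading the vmstat
     * counters for every page pulled off the free lists.
     */
    static __always_inline struct page *
    __rmqueue(struct zone *zone, unsigned int order, int migratetype,
    	  unsigned int alloc_flags, bool cma_first)
    {
    	struct page *page;

    	if (cma_first) {
    		page = __rmqueue_cma_fallback(zone, order);
    		if (page)
    			return page;
    	}
    	/* ... existing __rmqueue_smallest()/fallback path unchanged ... */
    	return __rmqueue_smallest(zone, order, migratetype);
    }

    static int rmqueue_bulk(struct zone *zone, unsigned int order,
    			unsigned long count, struct list_head *list,
    			int migratetype, unsigned int alloc_flags)
    {
    	bool cma_first = false;
    	int i;

    	spin_lock(&zone->lock);

    	/*
    	 * One pair of vmstat reads per batch: prefer CMA pageblocks for
    	 * this refill when over half of the zone's free memory is CMA.
    	 */
    	if (migratetype == MIGRATE_MOVABLE)
    		cma_first = zone_page_state(zone, NR_FREE_CMA_PAGES) >
    			    zone_page_state(zone, NR_FREE_PAGES) / 2;

    	for (i = 0; i < count; ++i) {
    		struct page *page = __rmqueue(zone, order, migratetype,
    					      alloc_flags, cma_first);
    		/* ... existing list handling and accounting unchanged ... */
    	}

    	spin_unlock(&zone->lock);
    	return i;
    }

The point is only that the two counter reads happen once per batch under
zone->lock rather than once per allocated page.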
So I really want to see this patch go upstream to make CMA utilization
higher. A good point of this patch is that it is quite simple.

As for the CMA allocation failure ratio, there is no good way to solve that
issue perfectly; even if we go with the CMA zone approach, it could still
happen. If so, I'd rather expose the symptom more aggressively, so that we
hear the pain and actively look for a solution instead of relying on luck.

Thus,

Acked-by: Minchan Kim