Date: Thu, 22 Sep 2016 14:45:46 +0900
From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
To: Vlastimil Babka
Cc: Andrew Morton, Rik van Riel, Johannes Weiner, mgorman@techsingularity.net,
    Laura Abbott, Minchan Kim, Marek Szyprowski, Michal Nazarewicz,
    "Aneesh Kumar K.V", linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v5 3/6] mm/cma: populate ZONE_CMA
Message-ID: <20160922054546.GC27958@js1304-P5Q-DELUXE>
References: <1472447255-10584-1-git-send-email-iamjoonsoo.kim@lge.com>
    <1472447255-10584-4-git-send-email-iamjoonsoo.kim@lge.com>

On Wed, Sep 21, 2016 at 11:20:11AM +0200, Vlastimil Babka wrote:
> On 08/29/2016 07:07 AM, js1304@gmail.com wrote:
> >From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> >
> >Until now, reserved pages for CMA have been managed in the ordinary
> >zones that their pfns belong to. This approach has numerous problems
> >and fixing them isn't easy. (They are mentioned in the previous patch.)
> >To fix this situation, ZONE_CMA was introduced in the previous patch,
> >but it is not yet populated. This patch implements population of
> >ZONE_CMA by stealing reserved pages from the ordinary zones.
> >
> >Unlike the previous implementation, where a kernel allocation request
> >with __GFP_MOVABLE could be serviced from the CMA region, in the new
> >approach only requests with GFP_HIGHUSER_MOVABLE can be serviced from
> >the CMA region. This is an inevitable design decision for the zone
> >implementation because ZONE_CMA could contain highmem. Due to this
> >decision, ZONE_CMA will work like ZONE_HIGHMEM or ZONE_MOVABLE.
> >
> >I don't think this will be a problem because most file cache pages and
> >anonymous pages are requested with GFP_HIGHUSER_MOVABLE. This is
> >supported by the fact that there are many systems with ZONE_HIGHMEM and
> >they work fine. A notable disadvantage is that we cannot use these
> >pages for the blockdev file cache, because such allocations usually
> >have __GFP_MOVABLE but not __GFP_HIGHMEM and __GFP_USER. There are pros
> >and cons here, though: in my experience, blockdev file cache pages are
> >one of the top reasons that cma_alloc() fails temporarily, so by giving
> >up that case we get a better guarantee that cma_alloc() succeeds.
> >
> >The implementation itself is very easy to understand: steal the pages
> >when a CMA area is initialized and recalculate the various per-zone
> >stats and thresholds.
> >
> >Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
>
> ...
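To make the GFP distinction above concrete, here is a minimal,
userspace-compilable sketch of the eligibility rule. The flag values and
the can_use_zone_cma() helper are made up for illustration (this is not
the kernel's gfp_zone() logic): only a request carrying both
__GFP_HIGHMEM and __GFP_MOVABLE, i.e. GFP_HIGHUSER_MOVABLE, would be
eligible for ZONE_CMA, while blockdev page cache allocated with
__GFP_MOVABLE alone would not.

/* Illustrative only; flag names mirror gfp.h, values are made up. */
#include <stdbool.h>
#include <stdio.h>

#define __GFP_HIGHMEM           0x1u
#define __GFP_MOVABLE           0x2u
#define __GFP_USER              0x4u    /* stand-in for the userspace hint */

#define GFP_HIGHUSER_MOVABLE    (__GFP_HIGHMEM | __GFP_MOVABLE | __GFP_USER)

/* ZONE_CMA can hold highmem, so only highmem-capable movable requests qualify */
static bool can_use_zone_cma(unsigned int gfp)
{
        return (gfp & __GFP_HIGHMEM) && (gfp & __GFP_MOVABLE);
}

int main(void)
{
        /* anon/file cache pages: GFP_HIGHUSER_MOVABLE -> eligible */
        printf("GFP_HIGHUSER_MOVABLE: %d\n",
               can_use_zone_cma(GFP_HIGHUSER_MOVABLE));
        /* blockdev page cache: __GFP_MOVABLE without __GFP_HIGHMEM -> not eligible */
        printf("blockdev (__GFP_MOVABLE only): %d\n",
               can_use_zone_cma(__GFP_MOVABLE));
        return 0;
}

This mirrors how ZONE_HIGHMEM is only usable by highmem-capable
requests, which is why ZONE_CMA behaves like ZONE_HIGHMEM or
ZONE_MOVABLE here.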
> >@@ -145,6 +145,28 @@ err:
> > static int __init cma_init_reserved_areas(void)
> > {
> > 	int i;
> >+	struct zone *zone;
> >+	unsigned long start_pfn = UINT_MAX, end_pfn = 0;
> >+
> >+	if (!cma_area_count)
> >+		return 0;
> >+
> >+	for (i = 0; i < cma_area_count; i++) {
> >+		if (start_pfn > cma_areas[i].base_pfn)
> >+			start_pfn = cma_areas[i].base_pfn;
> >+		if (end_pfn < cma_areas[i].base_pfn + cma_areas[i].count)
> >+			end_pfn = cma_areas[i].base_pfn + cma_areas[i].count;
> >+	}
> >+
> >+	for_each_zone(zone) {
> >+		if (!is_zone_cma(zone))
> >+			continue;
> >+
> >+		/* ZONE_CMA doesn't need to exceed CMA region */
> >+		zone->zone_start_pfn = max(zone->zone_start_pfn, start_pfn);
> >+		zone->spanned_pages = min(zone_end_pfn(zone), end_pfn) -
> >+						zone->zone_start_pfn;
> >+	}
>
> Hmm, so what happens on a system with multiple nodes? Each will have
> its own ZONE_CMA, and all will have the same start pfn and spanned
> pages?

zone_start_pfn and spanned_pages are initialized in
calculate_node_totalpages(), which takes the node boundary into account,
so they will not all end up with the same start pfn and spanned pages.
However, each node's ZONE_CMA would still contain unnecessary holes.

> > /* Free whole pageblock and set its migration type to MIGRATE_CMA. */
> > void __init init_cma_reserved_pageblock(struct page *page)
> > {
> > 	unsigned i = pageblock_nr_pages;
> >+	unsigned long pfn = page_to_pfn(page);
> > 	struct page *p = page;
> >+	int nid = page_to_nid(page);
> >+
> >+	/*
> >+	 * ZONE_CMA will steal present pages from other zones by changing
> >+	 * page links so page_zone() is changed. Before that,
> >+	 * we need to adjust previous zone's page count first.
> >+	 */
> >+	adjust_present_page_count(page, -pageblock_nr_pages);
> >
> > 	do {
> > 		__ClearPageReserved(p);
> > 		set_page_count(p, 0);
> >-	} while (++p, --i);
> >+
> >+		/* Steal pages from other zones */
> >+		set_page_links(p, ZONE_CMA, nid, pfn);
> >+	} while (++p, ++pfn, --i);
> >+
> >+	adjust_present_page_count(page, pageblock_nr_pages);
>
> This seems to assign pages to ZONE_CMA on the proper node, which is
> good. But then ZONE_CMA on multiple nodes will have unnecessary
> holes in the spanned pages, as each will contain only a subset.

True, I will fix it and respin the series.

Thanks.
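For illustration only, the direction of such a fix could look roughly
like the following standalone sketch: compute the pfn span per node so
that each node's ZONE_CMA spans only its own CMA areas. The cma_area
layout and the nid field here are hypothetical stand-ins, not the actual
data structures or the code that will appear in the respun series.

/* Standalone illustration; not kernel code. */
#include <limits.h>
#include <stdio.h>

struct cma_area {
        unsigned long base_pfn;
        unsigned long count;
        int nid;                /* node the area's pages belong to */
};

/* example data: one CMA area on each of two nodes */
static struct cma_area cma_areas[] = {
        { 0x10000, 0x400, 0 },
        { 0x80000, 0x800, 1 },
};

/* span only the CMA areas that belong to @nid, instead of a global range */
static void cma_span_for_node(int nid, unsigned long *start, unsigned long *end)
{
        unsigned long s = ULONG_MAX, e = 0;
        unsigned int i;

        for (i = 0; i < sizeof(cma_areas) / sizeof(cma_areas[0]); i++) {
                if (cma_areas[i].nid != nid)
                        continue;
                if (s > cma_areas[i].base_pfn)
                        s = cma_areas[i].base_pfn;
                if (e < cma_areas[i].base_pfn + cma_areas[i].count)
                        e = cma_areas[i].base_pfn + cma_areas[i].count;
        }
        *start = s;
        *end = e;
}

int main(void)
{
        unsigned long start, end;
        int nid;

        for (nid = 0; nid < 2; nid++) {
                cma_span_for_node(nid, &start, &end);
                printf("node %d: ZONE_CMA pfn range [%#lx, %#lx)\n",
                       nid, start, end);
        }
        return 0;
}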