From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S934119AbcIVQAJ (ORCPT ); Thu, 22 Sep 2016 12:00:09 -0400
Received: from mx2.suse.de ([195.135.220.15]:49152 "EHLO mx2.suse.de"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1753548AbcIVQAH (ORCPT ); Thu, 22 Sep 2016 12:00:07 -0400
Subject: Re: [PATCH v5 3/6] mm/cma: populate ZONE_CMA
To: Joonsoo Kim
References: <1472447255-10584-1-git-send-email-iamjoonsoo.kim@lge.com>
 <1472447255-10584-4-git-send-email-iamjoonsoo.kim@lge.com>
 <20160922054546.GC27958@js1304-P5Q-DELUXE>
 <20160922065048.GD27958@js1304-P5Q-DELUXE>
Cc: Andrew Morton, Rik van Riel, Johannes Weiner,
 mgorman@techsingularity.net, Laura Abbott, Minchan Kim,
 Marek Szyprowski, Michal Nazarewicz, "Aneesh Kumar K.V",
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
From: Vlastimil Babka
Message-ID: <24b9e9e5-0580-b19d-9501-44a19555b4b7@suse.cz>
Date: Thu, 22 Sep 2016 17:59:46 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101
 Thunderbird/45.1.1
MIME-Version: 1.0
In-Reply-To: <20160922065048.GD27958@js1304-P5Q-DELUXE>
Content-Type: text/plain; charset=windows-1252; format=flowed
Content-Transfer-Encoding: 7bit
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On 09/22/2016 08:50 AM, Joonsoo Kim wrote:
> On Thu, Sep 22, 2016 at 02:45:46PM +0900, Joonsoo Kim wrote:
>> >
>> > > /* Free whole pageblock and set its migration type to MIGRATE_CMA. */
>> > > void __init init_cma_reserved_pageblock(struct page *page)
>> > > {
>> > >         unsigned i = pageblock_nr_pages;
>> > >+        unsigned long pfn = page_to_pfn(page);
>> > >         struct page *p = page;
>> > >+        int nid = page_to_nid(page);
>> > >+
>> > >+        /*
>> > >+         * ZONE_CMA will steal present pages from other zones by changing
>> > >+         * page links so page_zone() is changed. Before that,
>> > >+         * we need to adjust previous zone's page count first.
>> > >+         */
>> > >+        adjust_present_page_count(page, -pageblock_nr_pages);
>> > >
>> > >         do {
>> > >                 __ClearPageReserved(p);
>> > >                 set_page_count(p, 0);
>> > >-        } while (++p, --i);
>> > >+
>> > >+                /* Steal pages from other zones */
>> > >+                set_page_links(p, ZONE_CMA, nid, pfn);
>> > >+        } while (++p, ++pfn, --i);
>> > >+
>> > >+        adjust_present_page_count(page, pageblock_nr_pages);
>> >
>> > This seems to assign pages to ZONE_CMA on the proper node, which is
>> > good. But then ZONE_CMA on multiple nodes will have unnecessary
>> > holes in the spanned pages, as each will contain only a subset.
>>
>> True, I will fix it and respin the series.
>
> I now realize that it's too late to send full series for next
> merge window. I will send full series after next merge window is closed.

I think there might still be an rc8, thus another week.

> Anyway, I'd like to confirm that following incremental patch will solve
> your concern.

Yeah that should work, as long as single cma areas don't include
multiple nodes?

> Thanks.
>
>
> ------>8--------------
>  mm/cma.c | 25 ++++++++++++++++---------
>  1 file changed, 16 insertions(+), 9 deletions(-)
>
> diff --git a/mm/cma.c b/mm/cma.c
> index d69bdf7..8375554 100644
> --- a/mm/cma.c
> +++ b/mm/cma.c
> @@ -146,22 +146,29 @@ static int __init cma_init_reserved_areas(void)
>  {
>          int i;
>          struct zone *zone;
> -        unsigned long start_pfn = UINT_MAX, end_pfn = 0;
> +        pg_data_t *pgdat;
>
>          if (!cma_area_count)
>                  return 0;
>
> -        for (i = 0; i < cma_area_count; i++) {
> -                if (start_pfn > cma_areas[i].base_pfn)
> -                        start_pfn = cma_areas[i].base_pfn;
> -                if (end_pfn < cma_areas[i].base_pfn + cma_areas[i].count)
> -                        end_pfn = cma_areas[i].base_pfn + cma_areas[i].count;
> -        }
> +        for_each_online_pgdat(pgdat) {
> +                unsigned long start_pfn = UINT_MAX, end_pfn = 0;
>
> -        for_each_zone(zone) {
> -                if (!is_zone_cma(zone))
> +                for (i = 0; i < cma_area_count; i++) {
> +                        if (page_to_nid(pfn_to_page(cma_areas[i].base_pfn)) !=

We have pfn_to_nid() (although the implementation is just like this).

> +                                                pgdat->node_id)
> +                                continue;
> +
> +                        start_pfn = min(start_pfn, cma_areas[i].base_pfn);
> +                        end_pfn = max(end_pfn, cma_areas[i].base_pfn +
> +                                                cma_areas[i].count);
> +                }
> +
> +                if (!end_pfn)
>                          continue;
>
> +                zone = &pgdat->node_zones[ZONE_CMA];
> +
>                  /* ZONE_CMA doesn't need to exceed CMA region */
>                  zone->zone_start_pfn = max(zone->zone_start_pfn, start_pfn);
>                  zone->spanned_pages = min(zone_end_pfn(zone), end_pfn) -
>
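
For illustration only, here is a minimal, self-contained userspace sketch of
the per-node span computation that the incremental patch performs. The toy
cma_areas[] contents, the two-node pfn_to_nid() mapping and NODE1_START_PFN
are invented for the example and are not part of the patch; only the per-node
min/max logic mirrors the kernel change.

/* Toy model of the per-node ZONE_CMA span computation (not kernel code). */
#include <stdio.h>

#define MAX_NUMNODES    2
#define NODE1_START_PFN 0x80000UL   /* made-up boundary between node 0 and node 1 */

struct cma_area {
        unsigned long base_pfn;
        unsigned long count;
};

static struct cma_area cma_areas[] = {
        { 0x10000, 0x4000 },    /* area on node 0 */
        { 0x90000, 0x2000 },    /* area on node 1 */
        { 0xa8000, 0x1000 },    /* another area on node 1 */
};

/* Stand-in for the kernel's pfn_to_nid(): everything above the boundary is node 1 */
static int pfn_to_nid(unsigned long pfn)
{
        return pfn >= NODE1_START_PFN ? 1 : 0;
}

int main(void)
{
        unsigned long nr_areas = sizeof(cma_areas) / sizeof(cma_areas[0]);

        for (int nid = 0; nid < MAX_NUMNODES; nid++) {
                unsigned long start_pfn = ~0UL, end_pfn = 0;

                /* Per-node equivalent of the loop over cma_areas[] in the patch */
                for (unsigned long i = 0; i < nr_areas; i++) {
                        if (pfn_to_nid(cma_areas[i].base_pfn) != nid)
                                continue;
                        if (cma_areas[i].base_pfn < start_pfn)
                                start_pfn = cma_areas[i].base_pfn;
                        if (cma_areas[i].base_pfn + cma_areas[i].count > end_pfn)
                                end_pfn = cma_areas[i].base_pfn + cma_areas[i].count;
                }

                if (!end_pfn) {
                        printf("node %d: no CMA area, ZONE_CMA stays empty\n", nid);
                        continue;
                }

                /* Each node's span covers only its own areas, so no cross-node holes */
                printf("node %d: ZONE_CMA would span pfn %#lx - %#lx\n",
                       nid, start_pfn, end_pfn);
        }

        return 0;
}

Note that the sketch, like the patch, classifies an area purely by the node of
its base_pfn, so it only gives the right spans when a single cma area does not
cross a node boundary, which is the assumption questioned above.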