From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
        id S1751907AbcF0Jqp (ORCPT );
        Mon, 27 Jun 2016 05:46:45 -0400
Received: from mx2.suse.de ([195.135.220.15]:52016 "EHLO mx2.suse.de"
        rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
        id S1751590AbcF0Jqn (ORCPT );
        Mon, 27 Jun 2016 05:46:43 -0400
Subject: Re: [PATCH v3 5/6] mm/cma: remove MIGRATE_CMA
To: js1304@gmail.com, Andrew Morton
References: <1464243748-16367-1-git-send-email-iamjoonsoo.kim@lge.com>
 <1464243748-16367-6-git-send-email-iamjoonsoo.kim@lge.com>
Cc: Rik van Riel, Johannes Weiner, mgorman@techsingularity.net,
 Laura Abbott, Minchan Kim, Marek Szyprowski, Michal Nazarewicz,
 "Aneesh Kumar K.V", Rui Teng, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, Joonsoo Kim
From: Vlastimil Babka
Message-ID: <087368b2-19d3-30e0-e420-456c291f16c9@suse.cz>
Date: Mon, 27 Jun 2016 11:46:39 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101
 Thunderbird/45.1.1
MIME-Version: 1.0
In-Reply-To: <1464243748-16367-6-git-send-email-iamjoonsoo.kim@lge.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

On 05/26/2016 08:22 AM, js1304@gmail.com wrote:
> From: Joonsoo Kim
>
> Now all reserved pages for a CMA region belong to ZONE_CMA, and there
> is no other type of page there. Therefore, we no longer need
> MIGRATE_CMA to distinguish CMA pages from ordinary pages and handle
> them differently. Remove MIGRATE_CMA.
>
> Unfortunately, this patch makes the free CMA counter incorrect,
> because we count it while pages are on MIGRATE_CMA. That will be
> fixed by the next patch. I could squash the next patch into this
> one, but that would make the changes complicated and hard to review,
> so I keep them separate.

Doesn't sound like a big deal.

> Signed-off-by: Joonsoo Kim

[...]

> @@ -7442,14 +7401,14 @@ int alloc_contig_range(unsigned long start, unsigned long end,
>  	 * allocator removing them from the buddy system. This way
>  	 * page allocator will never consider using them.
>  	 *
> -	 * This lets us mark the pageblocks back as
> -	 * MIGRATE_CMA/MIGRATE_MOVABLE so that free pages in the
> -	 * aligned range but not in the unaligned, original range are
> -	 * put back to page allocator so that buddy can use them.
> +	 * This lets us mark the pageblocks back as MIGRATE_MOVABLE
> +	 * so that free pages in the aligned range but not in the
> +	 * unaligned, original range are put back to page allocator
> +	 * so that buddy can use them.
>  	 */
>
>  	ret = start_isolate_page_range(pfn_max_align_down(start),
> -				       pfn_max_align_up(end), migratetype,
> +				       pfn_max_align_up(end), MIGRATE_MOVABLE,
>  				       false);
>  	if (ret)
>  		return ret;
> @@ -7528,7 +7487,7 @@ int alloc_contig_range(unsigned long start, unsigned long end,
>
>  done:
>  	undo_isolate_page_range(pfn_max_align_down(start),
> -				pfn_max_align_up(end), migratetype);
> +				pfn_max_align_up(end), MIGRATE_MOVABLE);
>  	return ret;
>  }

Looks like all callers of {start,undo}_isolate_page_range() now use
MIGRATE_MOVABLE, so the migratetype parameter could be removed from
both functions.
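
Untested sketch of what I mean, just to illustrate (prototypes from my
memory of page-isolation.h, so double-check against your tree): drop the
migratetype argument and let the implementations hard-code MIGRATE_MOVABLE
when undoing the isolation:

/* include/linux/page-isolation.h -- sketch only, not compile-tested */
int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
			     bool skip_hwpoisoned_pages);
int undo_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn);

The two call sites in alloc_contig_range() above would then pass just the
pfn range (plus the skip_hwpoisoned flag on the start side).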