From: Rik van Riel
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@fb.com,
 Anshuman Khandual, Mel Gorman, Vlastimil Babka, Qian Cai, Roman Gushchin
Subject: [PATCH] mm,cma: remove pfn_range_valid_contig
Date: Fri, 6 Mar 2020 17:06:47 -0500
Message-ID: <20200306170647.455a2db3@imladris.surriel.com>

The function pfn_range_valid_contig checks whether all memory in the
target area is free. This causes unnecessary CMA failures, since
alloc_contig_range will migrate movable memory out of a target range,
and has its own sanity check early on in has_unmovable_pages, which
is called from start_isolate_page_range & set_migratetype_isolate.

Relying on that has_unmovable_pages call simplifies the CMA code and
results in an increased success rate of CMA allocations.
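To illustrate the check path this patch relies on, here is a
simplified sketch (not the verbatim kernel code; details elided):

/*
 * Sketch: alloc_contig_range() vets the range while isolating it,
 * before doing any allocation work.
 */
int alloc_contig_range(unsigned long start, unsigned long end,
		       unsigned migratetype, gfp_t gfp_mask)
{
	int ret;

	/*
	 * start_isolate_page_range() -> set_migratetype_isolate()
	 * -> has_unmovable_pages() fails the call early if the range
	 * contains reserved, pinned, or otherwise unmovable pages.
	 */
	ret = start_isolate_page_range(pfn_max_align_down(start),
				       pfn_max_align_up(end),
				       migratetype, 0);
	if (ret)
		return ret;

	/*
	 * Movable pages in the range are migrated out rather than
	 * treated as a failure, which is why requiring the whole
	 * range to be free up front (as pfn_range_valid_contig did)
	 * was overly strict.
	 */
	...
}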
Signed-off-by: Rik van Riel
---
 mm/page_alloc.c | 47 +++--------------------------------------------
 1 file changed, 3 insertions(+), 44 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0fb3c1719625..75e84907d8c6 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -8539,32 +8539,6 @@ static int __alloc_contig_pages(unsigned long start_pfn,
 				  gfp_mask);
 }
 
-static bool pfn_range_valid_contig(struct zone *z, unsigned long start_pfn,
-				   unsigned long nr_pages)
-{
-	unsigned long i, end_pfn = start_pfn + nr_pages;
-	struct page *page;
-
-	for (i = start_pfn; i < end_pfn; i++) {
-		page = pfn_to_online_page(i);
-		if (!page)
-			return false;
-
-		if (page_zone(page) != z)
-			return false;
-
-		if (PageReserved(page))
-			return false;
-
-		if (page_count(page) > 0)
-			return false;
-
-		if (PageHuge(page))
-			return false;
-	}
-	return true;
-}
-
 static bool zone_spans_last_pfn(const struct zone *zone, unsigned long start_pfn,
 				unsigned long nr_pages)
 {
@@ -8605,28 +8579,13 @@ struct page *alloc_contig_pages(unsigned long nr_pages, gfp_t gfp_mask,
 	zonelist = node_zonelist(nid, gfp_mask);
 	for_each_zone_zonelist_nodemask(zone, z, zonelist,
 					gfp_zone(gfp_mask), nodemask) {
-		spin_lock_irqsave(&zone->lock, flags);
-
 		pfn = ALIGN(zone->zone_start_pfn, nr_pages);
 		while (zone_spans_last_pfn(zone, pfn, nr_pages)) {
-			if (pfn_range_valid_contig(zone, pfn, nr_pages)) {
-				/*
-				 * We release the zone lock here because
-				 * alloc_contig_range() will also lock the zone
-				 * at some point. If there's an allocation
-				 * spinning on this lock, it may win the race
-				 * and cause alloc_contig_range() to fail...
-				 */
-				spin_unlock_irqrestore(&zone->lock, flags);
-				ret = __alloc_contig_pages(pfn, nr_pages,
-							   gfp_mask);
-				if (!ret)
-					return pfn_to_page(pfn);
-				spin_lock_irqsave(&zone->lock, flags);
-			}
+			ret = __alloc_contig_pages(pfn, nr_pages, gfp_mask);
+			if (!ret)
+				return pfn_to_page(pfn);
 			pfn += nr_pages;
 		}
-		spin_unlock_irqrestore(&zone->lock, flags);
 	}
 	return NULL;
 }
-- 
2.24.1
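For reviewers, a caller-side view of the interface this touches
(illustrative sketch only; grab_contig is a made-up helper and error
handling is trimmed):

#include <linux/gfp.h>

/*
 * Illustrative only: allocate nr_pages physically contiguous pages
 * on node nid. With this patch, candidate ranges that contain
 * movable pages are attempted (and the pages migrated out) instead
 * of being skipped.
 */
static struct page *grab_contig(unsigned long nr_pages, int nid)
{
	return alloc_contig_pages(nr_pages, GFP_KERNEL, nid, NULL);
}

/* Release with: free_contig_range(page_to_pfn(page), nr_pages); */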