From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1753798AbbHZOx2 (ORCPT); Wed, 26 Aug 2015 10:53:28 -0400
Received: from outbound-smtp03.blacknight.com ([81.17.249.16]:55662 "EHLO
	outbound-smtp03.blacknight.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1751525AbbHZOx1 (ORCPT); Wed, 26 Aug 2015 10:53:27 -0400
Date: Wed, 26 Aug 2015 15:53:18 +0100
From: Mel Gorman
To: Vlastimil Babka
Cc: Andrew Morton, Johannes Weiner, Rik van Riel, David Rientjes,
	Joonsoo Kim, Michal Hocko, Linux-MM, LKML
Subject: Re: [PATCH 12/12] mm, page_alloc: Only enforce watermarks for order-0 allocations
Message-ID: <20150826145318.GP12432@techsingularity.net>
References: <1440418191-10894-1-git-send-email-mgorman@techsingularity.net>
	<20150824123015.GJ12432@techsingularity.net>
	<55DDC23F.8020004@suse.cz>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-15
Content-Disposition: inline
In-Reply-To: <55DDC23F.8020004@suse.cz>
User-Agent: Mutt/1.5.23 (2014-03-12)
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Aug 26, 2015 at 03:42:23PM +0200, Vlastimil Babka wrote:
> >@@ -2309,22 +2311,30 @@ static bool __zone_watermark_ok(struct zone *z, unsigned int order,
> > #ifdef CONFIG_CMA
> > 	/* If allocation can't use CMA areas don't use free CMA pages */
> > 	if (!(alloc_flags & ALLOC_CMA))
> >-		free_cma = zone_page_state(z, NR_FREE_CMA_PAGES);
> >+		free_pages -= zone_page_state(z, NR_FREE_CMA_PAGES);
> > #endif
> >
> >-	if (free_pages - free_cma <= min + z->lowmem_reserve[classzone_idx])
> >+	if (free_pages <= min + z->lowmem_reserve[classzone_idx])
> > 		return false;
> >-	for (o = 0; o < order; o++) {
> >-		/* At the next order, this order's pages become unavailable */
> >-		free_pages -= z->free_area[o].nr_free << o;
> >
> >-		/* Require fewer higher order pages to be free */
> >-		min >>= 1;
> >+	/* order-0 watermarks are ok */
> >+	if (!order)
> >+		return true;
> >+
> >+	/* Check at least one high-order page is free */
> >+	for (o = order; o < MAX_ORDER; o++) {
> >+		struct free_area *area = &z->free_area[o];
> >+		int mt;
> >+
> >+		if (atomic && area->nr_free)
> >+			return true;
> >
> >-		if (free_pages <= min)
> >-			return false;
> >+		for (mt = 0; mt < MIGRATE_PCPTYPES; mt++) {
> >+			if (!list_empty(&area->free_list[mt]))
> >+				return true;
> >+		}
> 
> I think we really need something like this here:
> 
> #ifdef CONFIG_CMA
> 	if ((alloc_flags & ALLOC_CMA) &&
> 			!list_empty(&area->free_list[MIGRATE_CMA]))
> 		return true;
> #endif
> 
> This is not about CMA and high-order atomic allocations being used at the
> same time. This is about high-order MIGRATE_MOVABLE allocations (that set
> ALLOC_CMA) failing to use MIGRATE_CMA pageblocks, which they should be
> allowed to use. It's complementary to the existing free_pages adjustment
> above.
> 
> Maybe there's not many high-order MIGRATE_MOVABLE allocations today, but
> they might increase with the driver migration framework. So why set up us
> a bomb.
> 

Ok, that seems sensible. Will apply this hunk on top

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 1a4169be1498..10f25bf18665 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2337,6 +2337,13 @@ static bool __zone_watermark_ok(struct zone *z, unsigned int order,
 			if (!list_empty(&area->free_list[mt]))
 				return true;
 		}
+
+#ifdef CONFIG_CMA
+		if ((alloc_flags & ALLOC_CMA) &&
+		    !list_empty(&area->free_list[MIGRATE_CMA])) {
+			return true;
+		}
+#endif
 	}
 
 	return false;
 }