Date: Wed, 26 Aug 2015 16:38:18 +0100
From: Mel Gorman
To: Michal Hocko
Cc: Andrew Morton, Johannes Weiner, Rik van Riel, Vlastimil Babka,
	David Rientjes, Joonsoo Kim, Linux-MM, LKML
Subject: Re: [PATCH 11/12] mm, page_alloc: Reserve pageblocks for high-order
	atomic allocations on demand
Message-ID: <20150826153818.GQ12432@techsingularity.net>
References: <1440418191-10894-1-git-send-email-mgorman@techsingularity.net>
	<20150824122957.GI12432@techsingularity.net>
	<20150826145352.GJ25196@dhcp22.suse.cz>
In-Reply-To: <20150826145352.GJ25196@dhcp22.suse.cz>

On Wed, Aug 26, 2015 at 04:53:52PM +0200, Michal Hocko wrote:
> > Overall, this is a small reduction, but the reserves are small relative
> > to the number of allocation requests. In early versions of the patch,
> > the failure rate was reduced by a much larger amount, but that required
> > much larger reserves and perversely made atomic allocations seem more
> > reliable than regular allocations.
>
> Have you considered a counter for vmstat/zoneinfo so that we have an
> overview of the memory consumed by this reserve?

It should already be available in /proc/pagetypeinfo.

> > Signed-off-by: Mel Gorman
>
> Acked-by: Michal Hocko
>
> [...]
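For reference, the per-migratetype pageblock counts can be inspected roughly like this (a sketch only: the exact rows depend on kernel version, a HighAtomic type only appears once this series is merged, and reading the file may require root on newer kernels):

```shell
# Show per-migratetype pageblock counts; a "HighAtomic" entry reflects the
# on-demand reserve once the series is applied. The fallback branch keeps
# the snippet safe on systems without a readable /proc/pagetypeinfo.
if [ -r /proc/pagetypeinfo ]; then
    grep -i 'highatomic\|Number of blocks' /proc/pagetypeinfo || true
else
    echo "pagetypeinfo not readable on this system"
fi
```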
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index d5ce050ebe4f..2415f882b89c 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> [...]
> > @@ -1645,10 +1725,16 @@ __rmqueue_fallback(struct zone *zone, unsigned int order, int start_migratetype)
> >   * Call me with the zone->lock already held.
> >   */
> >  static struct page *__rmqueue(struct zone *zone, unsigned int order,
> > -				int migratetype)
> > +				int migratetype, gfp_t gfp_flags)
> >  {
> >  	struct page *page;
> >
> > +	if (unlikely(order && (gfp_flags & __GFP_ATOMIC))) {
> > +		page = __rmqueue_smallest(zone, order, MIGRATE_HIGHATOMIC);
> > +		if (page)
> > +			goto out;
>
> I guess you want to change migratetype to MIGRATE_HIGHATOMIC in the
> successful case so the tracepoint reports this properly.

Yes, thanks.

-- 
Mel Gorman
SUSE Labs