From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1755632AbaFKIMQ (ORCPT ); Wed, 11 Jun 2014 04:12:16 -0400
Received: from lgeamrelo02.lge.com ([156.147.1.126]:55220 "EHLO lgeamrelo02.lge.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1752718AbaFKIMN (ORCPT ); Wed, 11 Jun 2014 04:12:13 -0400
X-Original-SENDERIP: 10.177.220.145
X-Original-MAILFROM: iamjoonsoo.kim@lge.com
Date: Wed, 11 Jun 2014 17:16:06 +0900
From: Joonsoo Kim
To: Minchan Kim
Cc: Vlastimil Babka, David Rientjes, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, Andrew Morton, Greg Thelen,
	Mel Gorman, Michal Nazarewicz, Naoya Horiguchi,
	Christoph Lameter, Rik van Riel
Subject: Re: [PATCH 05/10] mm, compaction: remember position within pageblock in free pages scanner
Message-ID: <20140611081606.GB28258@js1304-P5Q-DELUXE>
References: <1402305982-6928-1-git-send-email-vbabka@suse.cz>
	<1402305982-6928-5-git-send-email-vbabka@suse.cz>
	<20140611021213.GF15630@bbox>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20140611021213.GF15630@bbox>
User-Agent: Mutt/1.5.21 (2010-09-15)
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Jun 11, 2014 at 11:12:13AM +0900, Minchan Kim wrote:
> On Mon, Jun 09, 2014 at 11:26:17AM +0200, Vlastimil Babka wrote:
> > Unlike the migration scanner, the free scanner remembers the beginning of the
> > last scanned pageblock in cc->free_pfn. It might therefore be rescanning pages
> > uselessly when called several times during a single compaction. This might have
> > been useful when pages were returned to the buddy allocator after a failed
> > migration, but this is no longer the case.
> >
> > This patch changes the meaning of cc->free_pfn so that if it points to the
> > middle of a pageblock, that pageblock is scanned only from cc->free_pfn to the
> > end. isolate_freepages_block() will record the pfn of the last page it looked
> > at, which is then used to update cc->free_pfn.
> >
> > In the mmtests stress-highalloc benchmark, this has resulted in lowering the
> > ratio between pages scanned by both scanners, from 2.5 free pages per migrate
> > page, to 2.25 free pages per migrate page, without affecting success rates.
> >
> > Signed-off-by: Vlastimil Babka
> Reviewed-by: Minchan Kim
>
> Below is a nitpick.
>
> > Cc: Minchan Kim
> > Cc: Mel Gorman
> > Cc: Joonsoo Kim
> > Cc: Michal Nazarewicz
> > Cc: Naoya Horiguchi
> > Cc: Christoph Lameter
> > Cc: Rik van Riel
> > Cc: David Rientjes
> > ---
> >  mm/compaction.c | 33 ++++++++++++++++++++++++++++-----
> >  1 file changed, 28 insertions(+), 5 deletions(-)
> >
> > diff --git a/mm/compaction.c b/mm/compaction.c
> > index 83f72bd..58dfaaa 100644
> > --- a/mm/compaction.c
> > +++ b/mm/compaction.c
> > @@ -297,7 +297,7 @@ static bool suitable_migration_target(struct page *page)
> >   * (even though it may still end up isolating some pages).
> >   */
> >  static unsigned long isolate_freepages_block(struct compact_control *cc,
> > -				unsigned long blockpfn,
> > +				unsigned long *start_pfn,
> >  				unsigned long end_pfn,
> >  				struct list_head *freelist,
> >  				bool strict)
> > @@ -306,6 +306,7 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
> >  	struct page *cursor, *valid_page = NULL;
> >  	unsigned long flags;
> >  	bool locked = false;
> > +	unsigned long blockpfn = *start_pfn;
> >
> >  	cursor = pfn_to_page(blockpfn);
> >
> > @@ -314,6 +315,9 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
> >  		int isolated, i;
> >  		struct page *page = cursor;
> >
> > +		/* Record how far we have got within the block */
> > +		*start_pfn = blockpfn;
> > +
>
> Couldn't we move this out of the loop for just one store?

Hello, Vlastimil.

Moreover, start_pfn can't be updated to the end pfn with this approach.
Is it okay?

Thanks.