From: Vlastimil Babka <vbabka@suse.cz>
To: David Rientjes <rientjes@google.com>, linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Andrew Morton <akpm@linux-foundation.org>,
	Greg Thelen <gthelen@google.com>, Vlastimil Babka <vbabka@suse.cz>,
	Minchan Kim <minchan@kernel.org>, Mel Gorman <mgorman@suse.de>,
	Joonsoo Kim <iamjoonsoo.kim@lge.com>, Michal Nazarewicz <mina86@mina86.com>,
	Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>, Christoph Lameter <cl@linux.com>,
	Rik van Riel <riel@redhat.com>
Subject: [PATCH 05/10] mm, compaction: remember position within pageblock in free pages scanner
Date: Mon, 9 Jun 2014 11:26:17 +0200
Message-ID: <1402305982-6928-5-git-send-email-vbabka@suse.cz> (raw)
In-Reply-To: <1402305982-6928-1-git-send-email-vbabka@suse.cz>

Unlike the migration scanner, the free scanner remembers the beginning of the
last scanned pageblock in cc->free_pfn. It can therefore rescan pages
uselessly when called several times during a single compaction. This might
have been useful when pages were returned to the buddy allocator after a
failed migration, but that is no longer the case.

This patch changes the meaning of cc->free_pfn so that if it points into the
middle of a pageblock, that pageblock is scanned only from cc->free_pfn to
its end. isolate_freepages_block() records the pfn of the last page it looked
at, which is then used to update cc->free_pfn.

In the mmtests stress-highalloc benchmark, this lowered the ratio of pages
scanned by the two scanners from 2.5 free pages per migrate page to 2.25,
without affecting success rates.
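The resume behaviour described above can be sketched with a toy model. All
names here (scan_block, page_free, BLOCK_SIZE) are invented for illustration;
the real isolate_freepages_block() operates on struct page, takes the zone
lock, and splits buddy pages rather than reading a bitmap:

```c
#include <stddef.h>

#define BLOCK_SIZE 8

/* Toy model of a pageblock: 1 = free page at that pfn offset. */
const int page_free[BLOCK_SIZE] = { 1, 0, 1, 1, 0, 0, 1, 1 };

/*
 * Isolate up to 'budget' free pages from [*start_pfn, end_pfn).
 * On return, *start_pfn points at the first pfn not yet examined,
 * mirroring how the patch turns the block-start argument of
 * isolate_freepages_block() into an in/out parameter that reports
 * scan progress back to the caller.
 */
unsigned long scan_block(unsigned long *start_pfn, unsigned long end_pfn,
			 unsigned long budget)
{
	unsigned long pfn = *start_pfn;
	unsigned long isolated = 0;

	while (pfn < end_pfn && isolated < budget) {
		if (page_free[pfn])
			isolated++;
		pfn++;
	}
	/* Record how far we got within the block. */
	*start_pfn = pfn;
	return isolated;
}
```

A caller that aborts mid-block (here, after a budget of 2 pages) can then
resume from the recorded pfn instead of rescanning the block from its start,
which is exactly the role the patch gives cc->free_pfn.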
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Michal Nazarewicz <mina86@mina86.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: David Rientjes <rientjes@google.com>
---
 mm/compaction.c | 33 ++++++++++++++++++++++++++++-----
 1 file changed, 28 insertions(+), 5 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index 83f72bd..58dfaaa 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -297,7 +297,7 @@ static bool suitable_migration_target(struct page *page)
  * (even though it may still end up isolating some pages).
  */
 static unsigned long isolate_freepages_block(struct compact_control *cc,
-				unsigned long blockpfn,
+				unsigned long *start_pfn,
 				unsigned long end_pfn,
 				struct list_head *freelist,
 				bool strict)
@@ -306,6 +306,7 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
 	struct page *cursor, *valid_page = NULL;
 	unsigned long flags;
 	bool locked = false;
+	unsigned long blockpfn = *start_pfn;

 	cursor = pfn_to_page(blockpfn);

@@ -314,6 +315,9 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
 		int isolated, i;
 		struct page *page = cursor;

+		/* Record how far we have got within the block */
+		*start_pfn = blockpfn;
+
 		/*
 		 * Periodically drop the lock (if held) regardless of its
 		 * contention, to give chance to IRQs. Abort async compaction
@@ -424,6 +428,9 @@ isolate_freepages_range(struct compact_control *cc,
 	LIST_HEAD(freelist);

 	for (pfn = start_pfn; pfn < end_pfn; pfn += isolated) {
+		/* Protect pfn from changing by isolate_freepages_block */
+		unsigned long isolate_start_pfn = pfn;
+
 		if (!pfn_valid(pfn) || cc->zone != page_zone(pfn_to_page(pfn)))
 			break;

@@ -434,8 +441,8 @@ isolate_freepages_range(struct compact_control *cc,
 		block_end_pfn = ALIGN(pfn + 1, pageblock_nr_pages);
 		block_end_pfn = min(block_end_pfn, end_pfn);

-		isolated = isolate_freepages_block(cc, pfn, block_end_pfn,
-						   &freelist, true);
+		isolated = isolate_freepages_block(cc, &isolate_start_pfn,
+						block_end_pfn, &freelist, true);

 		/*
 		 * In strict mode, isolate_freepages_block() returns 0 if
@@ -774,6 +781,7 @@ static void isolate_freepages(struct zone *zone,
 				block_end_pfn = block_start_pfn,
 				block_start_pfn -= pageblock_nr_pages) {
 		unsigned long isolated;
+		unsigned long isolate_start_pfn;

 		/*
 		 * This can iterate a massively long zone without finding any
@@ -807,12 +815,27 @@ static void isolate_freepages(struct zone *zone,
 			continue;

 		/* Found a block suitable for isolating free pages from */
-		cc->free_pfn = block_start_pfn;
-		isolated = isolate_freepages_block(cc, block_start_pfn,
+		isolate_start_pfn = block_start_pfn;
+
+		/*
+		 * If we are restarting the free scanner in this block, do not
+		 * rescan the beginning of the block
+		 */
+		if (cc->free_pfn < block_end_pfn)
+			isolate_start_pfn = cc->free_pfn;
+
+		isolated = isolate_freepages_block(cc, &isolate_start_pfn,
 					block_end_pfn, freelist, false);
 		nr_freepages += isolated;

 		/*
+		 * Remember where the free scanner should restart next time.
+		 * This will point to the last page of pageblock we just
+		 * scanned, if we scanned it fully.
+		 */
+		cc->free_pfn = isolate_start_pfn;
+
+		/*
 		 * Set a flag that we successfully isolated in this pageblock.
 		 * In the next loop iteration, zone->compact_cached_free_pfn
 		 * will not be updated and thus it will effectively contain the
-- 
1.8.4.5