From: Vlastimil Babka <vbabka@suse.cz>
To: linux-mm@kvack.org, Johannes Weiner <hannes@cmpxchg.org>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>,
	David Rientjes <rientjes@google.com>,
	Mel Gorman <mgorman@techsingularity.net>,
	linux-kernel@vger.kernel.org, kernel-team@fb.com,
	Vlastimil Babka <vbabka@suse.cz>
Subject: [PATCH v2 08/10] mm, compaction: finish whole pageblock to reduce fragmentation
Date: Fri, 10 Feb 2017 18:23:41 +0100	[thread overview]
Message-ID: <20170210172343.30283-9-vbabka@suse.cz> (raw)
In-Reply-To: <20170210172343.30283-1-vbabka@suse.cz>

The main goal of direct compaction is to form a high-order page for allocation,
but it should also help against long-term fragmentation when possible. Most
lower-than-pageblock-order compactions are for non-movable allocations, which
means that if we compact in a movable pageblock and terminate as soon as we
create the high-order page, it's unlikely that the fallback heuristics will
claim the whole block. Instead there might be a single unmovable page in a
pageblock full of movable pages, and the next unmovable allocation might pick
another pageblock and increase long-term fragmentation.

To help against such scenarios, this patch changes the termination criteria for
compaction so that the current pageblock is finished even if the high-order
page already exists. Note that the high-order page might also have formed
elsewhere in the zone due to parallel activity, but this patch doesn't try to
detect that.
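
As an illustration only, here is a minimal userspace sketch (not kernel code)
of the alignment test behind the new termination criterion. The 512-page
pageblock size and the helper name are assumptions made for this example; the
real check in the diff below is IS_ALIGNED(cc->migrate_pfn, pageblock_nr_pages):

#include <stdbool.h>
#include <stdio.h>

#define PAGEBLOCK_NR_PAGES 512UL		/* assumed: order-9 blocks, 4K pages */
#define IS_ALIGNED(x, a)   (((x) & ((a) - 1)) == 0)

/* Hypothetical helper: should sync compaction keep going at this pfn? */
static bool keep_compacting_pageblock(unsigned long migrate_pfn, bool sync)
{
	if (!sync)
		return false;	/* async compaction may report success at once */
	/*
	 * Sync compaction keeps migrating until the migration scanner
	 * crosses a pageblock boundary, i.e. migrate_pfn becomes aligned.
	 */
	return !IS_ALIGNED(migrate_pfn, PAGEBLOCK_NR_PAGES);
}

int main(void)
{
	printf("%d\n", keep_compacting_pageblock(1000, true));	/* 1: mid-block */
	printf("%d\n", keep_compacting_pageblock(1024, true));	/* 0: boundary */
	return 0;
}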

This is only done with sync compaction, because async compaction is limited to
pageblocks of the same migratetype, where it cannot result in a migratetype
fallback. (Async compaction also eagerly skips order-aligned blocks where
isolation fails, which is against the goal of migrating away as much of the
pageblock as possible.)

As a result of this patch, long-term memory fragmentation should be reduced.

Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
 mm/compaction.c | 35 +++++++++++++++++++++++++++++++++--
 mm/internal.h   |  1 +
 2 files changed, 34 insertions(+), 2 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index 84ef44c3b1c9..cef77a5fffea 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -1329,6 +1329,17 @@ static enum compact_result __compact_finished(struct zone *zone,
 	if (is_via_compact_memory(cc->order))
 		return COMPACT_CONTINUE;
 
+	if (cc->finishing_block) {
+		/*
+		 * We have finished the pageblock, but better check again that
+		 * we really succeeded.
+		 */
+		if (IS_ALIGNED(cc->migrate_pfn, pageblock_nr_pages))
+			cc->finishing_block = false;
+		else
+			return COMPACT_CONTINUE;
+	}
+
 	/* Direct compactor: Is a suitable page free? */
 	for (order = cc->order; order < MAX_ORDER; order++) {
 		struct free_area *area = &zone->free_area[order];
@@ -1349,8 +1360,28 @@ static enum compact_result __compact_finished(struct zone *zone,
 		 * other migratetype buddy lists.
 		 */
 		if (find_suitable_fallback(area, order, migratetype,
-						true, &can_steal) != -1)
-			return COMPACT_SUCCESS;
+						true, &can_steal) != -1) {
+
+			/* movable pages are OK in any pageblock */
+			if (migratetype == MIGRATE_MOVABLE)
+				return COMPACT_SUCCESS;
+
+			/*
+			 * We are stealing for a non-movable allocation. Make
+			 * sure we finish compacting the current pageblock
+			 * first so it is as free as possible and we won't
+			 * have to steal another one soon. This only applies
+			 * to sync compaction, as async compaction operates
+			 * on pageblocks of the same migratetype.
+			 */
+			if (cc->mode == MIGRATE_ASYNC ||
+				IS_ALIGNED(cc->migrate_pfn, pageblock_nr_pages)) {
+				return COMPACT_SUCCESS;
+			} else {
+				cc->finishing_block = true;
+				return COMPACT_CONTINUE;
+			}
+		}
 	}
 
 	return COMPACT_NO_SUITABLE_PAGE;
diff --git a/mm/internal.h b/mm/internal.h
index 888f33cc7641..cdb33c957906 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -188,6 +188,7 @@ struct compact_control {
 	bool direct_compaction;		/* False from kcompactd or /proc/... */
 	bool whole_zone;		/* Whole zone should/has been scanned */
 	bool contended;			/* Signal lock or sched contention */
+	bool finishing_block;		/* Finishing current pageblock */
 };
 
 unsigned long
-- 
2.11.0
