Subject: [RFC PATCH v3 1/3] mm/cma: change fallback behaviour for CMA freepage
From: Joonsoo Kim @ 2015-02-02  7:15 UTC
  To: Andrew Morton
  Cc: Mel Gorman, David Rientjes, Rik van Riel, linux-mm, linux-kernel,
	Zhang Yanfei, Joonsoo Kim

Freepages with MIGRATE_CMA can be used only for MIGRATE_MOVABLE
allocations, and they should not be expanded onto other migratetypes'
buddy lists, so that they stay protected from unmovable/reclaimable
allocations. Implementing these requirements in __rmqueue_fallback(),
which looks for the largest possible block of freepages, has a bad
side effect: high-order MIGRATE_CMA freepages are split continually
even when a CMA freepage of a suitable order is already available.
The reason is that CMA freepages are not expanded onto other
migratetypes' buddy lists, so the next __rmqueue_fallback() invocation
finds another largest block of freepages and splits it again. So the
MIGRATE_CMA fallback should be handled separately. This patch
introduces __rmqueue_cma_fallback(), which is just a wrapper around
__rmqueue_smallest(), and calls it before __rmqueue_fallback()
when migratetype == MIGRATE_MOVABLE.
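
As a standalone illustration (a toy userspace model with hypothetical
free-list contents, not kernel code), the sketch below contrasts
largest-suitable-first selection, which is effectively what the old
fallback path did for CMA, with the smallest-suitable-first selection
done by __rmqueue_smallest():

  /*
   * Toy model of buddy free lists: nr_free[o] is the number of free
   * blocks of order o.  Illustrative only; not kernel code.
   */
  #include <stdio.h>

  #define MAX_ORDER 11

  /* Take one block of order 'from' and split it down to order 'want',
   * leaving one buddy on each intermediate free list. */
  static void split(int nr_free[MAX_ORDER], int from, int want)
  {
  	nr_free[from]--;
  	for (int o = from - 1; o >= want; o--)
  		nr_free[o]++;
  }

  /* Largest-suitable-first, as the old CMA fallback effectively did. */
  static void alloc_largest(int nr_free[MAX_ORDER], int want)
  {
  	for (int o = MAX_ORDER - 1; o >= want; o--)
  		if (nr_free[o]) {
  			split(nr_free, o, want);
  			return;
  		}
  }

  /* Smallest-suitable-first, as __rmqueue_smallest() does. */
  static void alloc_smallest(int nr_free[MAX_ORDER], int want)
  {
  	for (int o = want; o < MAX_ORDER; o++)
  		if (nr_free[o]) {
  			split(nr_free, o, want);
  			return;
  		}
  }

  static void show(const char *tag, const int nr_free[MAX_ORDER])
  {
  	printf("%-14s:", tag);
  	for (int o = 0; o < MAX_ORDER; o++)
  		if (nr_free[o])
  			printf(" %dxorder-%d", nr_free[o], o);
  	printf("\n");
  }

  int main(void)
  {
  	/* Hypothetical CMA free lists: one order-3, one order-9 block. */
  	int a[MAX_ORDER] = { [3] = 1, [9] = 1 };
  	int b[MAX_ORDER] = { [3] = 1, [9] = 1 };

  	alloc_largest(a, 3);	/* splits the order-9 block ... */
  	show("largest-first", a);
  	alloc_smallest(b, 3);	/* ... even though order-3 would do */
  	show("smallest-first", b);
  	return 0;
  }

After a single order-3 request, largest-first leaves the order-9 block
shattered (2x order-3 plus one block each of order-4..8) although an
order-3 block was sitting free, and a second request would split the
order-8 remnant again; smallest-first takes the order-3 block whole and
leaves the order-9 block intact.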

This results in an unintended behaviour change: as the fallback for
movable allocations, MIGRATE_CMA freepages are now always used before
any other migratetype. But, as mentioned above, MIGRATE_CMA pages can
serve only MIGRATE_MOVABLE allocations, so it is better to use
MIGRATE_CMA freepages first whenever possible. Otherwise, we would
needlessly consume precious freepages of other migratetypes and
increase the chance of fragmentation.
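
With this change, a MIGRATE_MOVABLE request is served in roughly the
following order (a sketch of the post-patch flow; see the __rmqueue()
hunk below):

  1. __rmqueue_smallest(zone, order, MIGRATE_MOVABLE)
  2. __rmqueue_cma_fallback(zone, order)         (CONFIG_CMA only)
  3. __rmqueue_fallback(zone, order, MIGRATE_MOVABLE)
  4. __rmqueue_smallest(zone, order, MIGRATE_RESERVE), via the
     retry_reserve path, rather than failing the allocation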

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
---
 mm/page_alloc.c | 36 +++++++++++++++++++-----------------
 1 file changed, 19 insertions(+), 17 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 8d52ab1..e64b260 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1029,11 +1029,9 @@ struct page *__rmqueue_smallest(struct zone *zone, unsigned int order,
 static int fallbacks[MIGRATE_TYPES][4] = {
 	[MIGRATE_UNMOVABLE]   = { MIGRATE_RECLAIMABLE, MIGRATE_MOVABLE,     MIGRATE_RESERVE },
 	[MIGRATE_RECLAIMABLE] = { MIGRATE_UNMOVABLE,   MIGRATE_MOVABLE,     MIGRATE_RESERVE },
+	[MIGRATE_MOVABLE]     = { MIGRATE_RECLAIMABLE, MIGRATE_UNMOVABLE,   MIGRATE_RESERVE },
 #ifdef CONFIG_CMA
-	[MIGRATE_MOVABLE]     = { MIGRATE_CMA,         MIGRATE_RECLAIMABLE, MIGRATE_UNMOVABLE, MIGRATE_RESERVE },
 	[MIGRATE_CMA]         = { MIGRATE_RESERVE }, /* Never used */
-#else
-	[MIGRATE_MOVABLE]     = { MIGRATE_RECLAIMABLE, MIGRATE_UNMOVABLE,   MIGRATE_RESERVE },
 #endif
 	[MIGRATE_RESERVE]     = { MIGRATE_RESERVE }, /* Never used */
 #ifdef CONFIG_MEMORY_ISOLATION
@@ -1041,6 +1039,17 @@ static int fallbacks[MIGRATE_TYPES][4] = {
 #endif
 };
 
+#ifdef CONFIG_CMA
+static struct page *__rmqueue_cma_fallback(struct zone *zone,
+					unsigned int order)
+{
+	return __rmqueue_smallest(zone, order, MIGRATE_CMA);
+}
+#else
+static inline struct page *__rmqueue_cma_fallback(struct zone *zone,
+					unsigned int order) { return NULL; }
+#endif
+
 /*
  * Move the free pages in a range to the free lists of the requested type.
  * Note that start_page and end_pages are not aligned on a pageblock
@@ -1192,19 +1201,8 @@ __rmqueue_fallback(struct zone *zone, unsigned int order, int start_migratetype)
 					struct page, lru);
 			area->nr_free--;
 
-			if (!is_migrate_cma(migratetype)) {
-				try_to_steal_freepages(zone, page,
-							start_migratetype,
-							migratetype);
-			} else {
-				/*
-				 * When borrowing from MIGRATE_CMA, we need to
-				 * release the excess buddy pages to CMA
-				 * itself, and we do not try to steal extra
-				 * free pages.
-				 */
-				buddy_type = migratetype;
-			}
+			try_to_steal_freepages(zone, page, start_migratetype,
+								migratetype);
 
 			/* Remove the page from the freelists */
 			list_del(&page->lru);
@@ -1246,7 +1244,11 @@ retry_reserve:
 	page = __rmqueue_smallest(zone, order, migratetype);
 
 	if (unlikely(!page) && migratetype != MIGRATE_RESERVE) {
-		page = __rmqueue_fallback(zone, order, migratetype);
+		if (migratetype == MIGRATE_MOVABLE)
+			page = __rmqueue_cma_fallback(zone, order);
+
+		if (!page)
+			page = __rmqueue_fallback(zone, order, migratetype);
 
 		/*
 		 * Use MIGRATE_RESERVE rather than fail an allocation. goto
-- 
1.9.1

