linux-kernel.vger.kernel.org archive mirror
* [RFC PATCH v3 1/3] mm/cma: change fallback behaviour for CMA freepage
@ 2015-02-02  7:15 Joonsoo Kim
  2015-02-02  7:15 ` [RFC PATCH v3 2/3] mm/page_alloc: factor out fallback freepage checking Joonsoo Kim
                   ` (3 more replies)
  0 siblings, 4 replies; 15+ messages in thread
From: Joonsoo Kim @ 2015-02-02  7:15 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Mel Gorman, David Rientjes, Rik van Riel, linux-mm, linux-kernel,
	Zhang Yanfei, Joonsoo Kim

Freepages with MIGRATE_CMA can be used only for MIGRATE_MOVABLE
allocations, and they should not be expanded onto other migratetypes'
buddy lists, to protect them from unmovable/reclaimable allocations.
Implementing these requirements in __rmqueue_fallback(), which looks
for the largest possible block of freepages, has the bad effect that
high-order MIGRATE_CMA freepages are split continually even though a
CMA freepage of a suitable order already exists. The reason is that
they are not expanded onto other migratetypes' buddy lists, so the
next __rmqueue_fallback() invocation finds another largest block of
freepages and splits it again. Therefore, the MIGRATE_CMA fallback
should be handled separately. This patch introduces
__rmqueue_cma_fallback(), which is just a wrapper around
__rmqueue_smallest(), and calls it before __rmqueue_fallback() when
migratetype == MIGRATE_MOVABLE.

This results in an unintended behaviour change: MIGRATE_CMA freepages
are now always used before other migratetypes as the fallback for
movable allocations. But, as already mentioned above, MIGRATE_CMA can
be used only for MIGRATE_MOVABLE, so it is better to use MIGRATE_CMA
freepages first as much as possible. Otherwise, we needlessly consume
precious freepages of other migratetypes and increase the chance of
fragmentation.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
---
 mm/page_alloc.c | 36 +++++++++++++++++++-----------------
 1 file changed, 19 insertions(+), 17 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 8d52ab1..e64b260 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1029,11 +1029,9 @@ struct page *__rmqueue_smallest(struct zone *zone, unsigned int order,
 static int fallbacks[MIGRATE_TYPES][4] = {
 	[MIGRATE_UNMOVABLE]   = { MIGRATE_RECLAIMABLE, MIGRATE_MOVABLE,     MIGRATE_RESERVE },
 	[MIGRATE_RECLAIMABLE] = { MIGRATE_UNMOVABLE,   MIGRATE_MOVABLE,     MIGRATE_RESERVE },
+	[MIGRATE_MOVABLE]     = { MIGRATE_RECLAIMABLE, MIGRATE_UNMOVABLE,   MIGRATE_RESERVE },
 #ifdef CONFIG_CMA
-	[MIGRATE_MOVABLE]     = { MIGRATE_CMA,         MIGRATE_RECLAIMABLE, MIGRATE_UNMOVABLE, MIGRATE_RESERVE },
 	[MIGRATE_CMA]         = { MIGRATE_RESERVE }, /* Never used */
-#else
-	[MIGRATE_MOVABLE]     = { MIGRATE_RECLAIMABLE, MIGRATE_UNMOVABLE,   MIGRATE_RESERVE },
 #endif
 	[MIGRATE_RESERVE]     = { MIGRATE_RESERVE }, /* Never used */
 #ifdef CONFIG_MEMORY_ISOLATION
@@ -1041,6 +1039,17 @@ static int fallbacks[MIGRATE_TYPES][4] = {
 #endif
 };
 
+#ifdef CONFIG_CMA
+static struct page *__rmqueue_cma_fallback(struct zone *zone,
+					unsigned int order)
+{
+	return __rmqueue_smallest(zone, order, MIGRATE_CMA);
+}
+#else
+static inline struct page *__rmqueue_cma_fallback(struct zone *zone,
+					unsigned int order) { return NULL; }
+#endif
+
 /*
  * Move the free pages in a range to the free lists of the requested type.
  * Note that start_page and end_pages are not aligned on a pageblock
@@ -1192,19 +1201,8 @@ __rmqueue_fallback(struct zone *zone, unsigned int order, int start_migratetype)
 					struct page, lru);
 			area->nr_free--;
 
-			if (!is_migrate_cma(migratetype)) {
-				try_to_steal_freepages(zone, page,
-							start_migratetype,
-							migratetype);
-			} else {
-				/*
-				 * When borrowing from MIGRATE_CMA, we need to
-				 * release the excess buddy pages to CMA
-				 * itself, and we do not try to steal extra
-				 * free pages.
-				 */
-				buddy_type = migratetype;
-			}
+			try_to_steal_freepages(zone, page, start_migratetype,
+								migratetype);
 
 			/* Remove the page from the freelists */
 			list_del(&page->lru);
@@ -1246,7 +1244,11 @@ retry_reserve:
 	page = __rmqueue_smallest(zone, order, migratetype);
 
 	if (unlikely(!page) && migratetype != MIGRATE_RESERVE) {
-		page = __rmqueue_fallback(zone, order, migratetype);
+		if (migratetype == MIGRATE_MOVABLE)
+			page = __rmqueue_cma_fallback(zone, order);
+
+		if (!page)
+			page = __rmqueue_fallback(zone, order, migratetype);
 
 		/*
 		 * Use MIGRATE_RESERVE rather than fail an allocation. goto
-- 
1.9.1



* [RFC PATCH v3 2/3] mm/page_alloc: factor out fallback freepage checking
  2015-02-02  7:15 [RFC PATCH v3 1/3] mm/cma: change fallback behaviour for CMA freepage Joonsoo Kim
@ 2015-02-02  7:15 ` Joonsoo Kim
  2015-02-02  9:59   ` Vlastimil Babka
  2015-02-02 12:56   ` Zhang Yanfei
  2015-02-02  7:15 ` [RFC PATCH v3 3/3] mm/compaction: enhance compaction finish condition Joonsoo Kim
                   ` (2 subsequent siblings)
  3 siblings, 2 replies; 15+ messages in thread
From: Joonsoo Kim @ 2015-02-02  7:15 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Mel Gorman, David Rientjes, Rik van Riel, linux-mm, linux-kernel,
	Zhang Yanfei, Joonsoo Kim

This is a preparation step for using the page allocator's
anti-fragmentation logic in compaction. This patch just separates the
fallback freepage checking part from the fallback freepage management
part. Therefore, there is no functional change.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
---
 mm/page_alloc.c | 128 +++++++++++++++++++++++++++++++++-----------------------
 1 file changed, 76 insertions(+), 52 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index e64b260..6cb18f8 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1142,14 +1142,26 @@ static void change_pageblock_range(struct page *pageblock_page,
  * as fragmentation caused by those allocations polluting movable pageblocks
  * is worse than movable allocations stealing from unmovable and reclaimable
  * pageblocks.
- *
- * If we claim more than half of the pageblock, change pageblock's migratetype
- * as well.
  */
-static void try_to_steal_freepages(struct zone *zone, struct page *page,
-				  int start_type, int fallback_type)
+static bool can_steal_fallback(unsigned int order, int start_mt)
+{
+	if (order >= pageblock_order)
+		return true;
+
+	if (order >= pageblock_order / 2 ||
+		start_mt == MIGRATE_RECLAIMABLE ||
+		start_mt == MIGRATE_UNMOVABLE ||
+		page_group_by_mobility_disabled)
+		return true;
+
+	return false;
+}
+
+static void steal_suitable_fallback(struct zone *zone, struct page *page,
+							  int start_type)
 {
 	int current_order = page_order(page);
+	int pages;
 
 	/* Take ownership for orders >= pageblock_order */
 	if (current_order >= pageblock_order) {
@@ -1157,19 +1169,39 @@ static void try_to_steal_freepages(struct zone *zone, struct page *page,
 		return;
 	}
 
-	if (current_order >= pageblock_order / 2 ||
-	    start_type == MIGRATE_RECLAIMABLE ||
-	    start_type == MIGRATE_UNMOVABLE ||
-	    page_group_by_mobility_disabled) {
-		int pages;
+	pages = move_freepages_block(zone, page, start_type);
 
-		pages = move_freepages_block(zone, page, start_type);
+	/* Claim the whole block if over half of it is free */
+	if (pages >= (1 << (pageblock_order-1)) ||
+			page_group_by_mobility_disabled)
+		set_pageblock_migratetype(page, start_type);
+}
 
-		/* Claim the whole block if over half of it is free */
-		if (pages >= (1 << (pageblock_order-1)) ||
-				page_group_by_mobility_disabled)
-			set_pageblock_migratetype(page, start_type);
+static int find_suitable_fallback(struct free_area *area, unsigned int order,
+					int migratetype, bool *can_steal)
+{
+	int i;
+	int fallback_mt;
+
+	if (area->nr_free == 0)
+		return -1;
+
+	*can_steal = false;
+	for (i = 0;; i++) {
+		fallback_mt = fallbacks[migratetype][i];
+		if (fallback_mt == MIGRATE_RESERVE)
+			break;
+
+		if (list_empty(&area->free_list[fallback_mt]))
+			continue;
+
+		if (can_steal_fallback(order, migratetype))
+			*can_steal = true;
+
+		return i;
 	}
+
+	return -1;
 }
 
 /* Remove an element from the buddy allocator from the fallback list */
@@ -1179,53 +1211,45 @@ __rmqueue_fallback(struct zone *zone, unsigned int order, int start_migratetype)
 	struct free_area *area;
 	unsigned int current_order;
 	struct page *page;
+	int fallback_mt;
+	bool can_steal;
 
 	/* Find the largest possible block of pages in the other list */
 	for (current_order = MAX_ORDER-1;
 				current_order >= order && current_order <= MAX_ORDER-1;
 				--current_order) {
-		int i;
-		for (i = 0;; i++) {
-			int migratetype = fallbacks[start_migratetype][i];
-			int buddy_type = start_migratetype;
-
-			/* MIGRATE_RESERVE handled later if necessary */
-			if (migratetype == MIGRATE_RESERVE)
-				break;
-
-			area = &(zone->free_area[current_order]);
-			if (list_empty(&area->free_list[migratetype]))
-				continue;
-
-			page = list_entry(area->free_list[migratetype].next,
-					struct page, lru);
-			area->nr_free--;
+		area = &(zone->free_area[current_order]);
+		fallback_mt = find_suitable_fallback(area, current_order,
+				start_migratetype, &can_steal);
+		if (fallback_mt == -1)
+			continue;
 
-			try_to_steal_freepages(zone, page, start_migratetype,
-								migratetype);
+		page = list_entry(area->free_list[fallback_mt].next,
+						struct page, lru);
+		if (can_steal)
+			steal_suitable_fallback(zone, page, start_migratetype);
 
-			/* Remove the page from the freelists */
-			list_del(&page->lru);
-			rmv_page_order(page);
-
-			expand(zone, page, order, current_order, area,
-					buddy_type);
+		/* Remove the page from the freelists */
+		area->nr_free--;
+		list_del(&page->lru);
+		rmv_page_order(page);
 
-			/*
-			 * The freepage_migratetype may differ from pageblock's
-			 * migratetype depending on the decisions in
-			 * try_to_steal_freepages(). This is OK as long as it
-			 * does not differ for MIGRATE_CMA pageblocks. For CMA
-			 * we need to make sure unallocated pages flushed from
-			 * pcp lists are returned to the correct freelist.
-			 */
-			set_freepage_migratetype(page, buddy_type);
+		expand(zone, page, order, current_order, area,
+					start_migratetype);
+		/*
+		 * The freepage_migratetype may differ from pageblock's
+		 * migratetype depending on the decisions in
+		 * try_to_steal_freepages(). This is OK as long as it
+		 * does not differ for MIGRATE_CMA pageblocks. For CMA
+		 * we need to make sure unallocated pages flushed from
+		 * pcp lists are returned to the correct freelist.
+		 */
+		set_freepage_migratetype(page, start_migratetype);
 
-			trace_mm_page_alloc_extfrag(page, order, current_order,
-				start_migratetype, migratetype);
+		trace_mm_page_alloc_extfrag(page, order, current_order,
+			start_migratetype, fallback_mt);
 
-			return page;
-		}
+		return page;
 	}
 
 	return NULL;
-- 
1.9.1



* [RFC PATCH v3 3/3] mm/compaction: enhance compaction finish condition
  2015-02-02  7:15 [RFC PATCH v3 1/3] mm/cma: change fallback behaviour for CMA freepage Joonsoo Kim
  2015-02-02  7:15 ` [RFC PATCH v3 2/3] mm/page_alloc: factor out fallback freepage checking Joonsoo Kim
@ 2015-02-02  7:15 ` Joonsoo Kim
  2015-02-02 10:20   ` Vlastimil Babka
  2015-02-02  7:29 ` [RFC PATCH v3 1/3] mm/cma: change fallback behaviour for CMA freepage Joonsoo Kim
  2015-02-02  8:27 ` Vlastimil Babka
  3 siblings, 1 reply; 15+ messages in thread
From: Joonsoo Kim @ 2015-02-02  7:15 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Mel Gorman, David Rientjes, Rik van Riel, linux-mm, linux-kernel,
	Zhang Yanfei, Joonsoo Kim

Compaction has an anti-fragmentation algorithm: if we don't find any
freepage on the requested migratetype's buddy list, a freepage of at
least pageblock order must be available for compaction to finish.
This mitigates fragmentation, but it lacks any migratetype
consideration and it is too strict compared to the page allocator's
anti-fragmentation algorithm.

Not considering the migratetype can cause compaction to finish
prematurely. For example, if the allocation request is for the
unmovable migratetype, a freepage with the CMA migratetype doesn't
help that allocation and compaction should not be stopped. But the
current logic regards this situation as "compaction is no longer
needed" and finishes the compaction.

Secondly, the condition is too strict compared to the page
allocator's logic. The page allocator can steal freepages from other
migratetypes and change a pageblock's migratetype under more relaxed
conditions. This is designed to prevent fragmentation and we can use
it here. Imposing a hard constraint only on compaction doesn't help
much in this case, since the page allocator would cause fragmentation
again.

To solve these problems, this patch borrows the anti-fragmentation
logic from the page allocator. It will reduce premature compaction
finishes in some cases and reduce excessive compaction work.

The stress-highalloc test in mmtests with non-movable order-7
allocations shows a considerable increase in the compaction success
rate.

Compaction success rate (Compaction success * 100 / Compaction stalls, %)
before: 31.82, after: 42.20

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
---
 mm/compaction.c | 14 ++++++++++++--
 mm/internal.h   |  2 ++
 mm/page_alloc.c | 12 ++++++++----
 3 files changed, 22 insertions(+), 6 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index 782772d..d40c426 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -1170,13 +1170,23 @@ static int __compact_finished(struct zone *zone, struct compact_control *cc,
 	/* Direct compactor: Is a suitable page free? */
 	for (order = cc->order; order < MAX_ORDER; order++) {
 		struct free_area *area = &zone->free_area[order];
+		bool can_steal;
 
 		/* Job done if page is free of the right migratetype */
 		if (!list_empty(&area->free_list[migratetype]))
 			return COMPACT_PARTIAL;
 
-		/* Job done if allocation would set block type */
-		if (order >= pageblock_order && area->nr_free)
+		/* MIGRATE_MOVABLE can fallback on MIGRATE_CMA */
+		if (migratetype == MIGRATE_MOVABLE &&
+			!list_empty(&area->free_list[MIGRATE_CMA]))
+			return COMPACT_PARTIAL;
+
+		/*
+		 * Job done if allocation would steal freepages from
+		 * other migratetype buddy lists.
+		 */
+		if (find_suitable_fallback(area, order, migratetype,
+						true, &can_steal) != -1)
 			return COMPACT_PARTIAL;
 	}
 
diff --git a/mm/internal.h b/mm/internal.h
index c4d6c9b..9640650 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -200,6 +200,8 @@ isolate_freepages_range(struct compact_control *cc,
 unsigned long
 isolate_migratepages_range(struct compact_control *cc,
 			   unsigned long low_pfn, unsigned long end_pfn);
+int find_suitable_fallback(struct free_area *area, unsigned int order,
+			int migratetype, bool only_stealable, bool *can_steal);
 
 #endif
 
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 6cb18f8..0a150f1 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1177,8 +1177,8 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
 		set_pageblock_migratetype(page, start_type);
 }
 
-static int find_suitable_fallback(struct free_area *area, unsigned int order,
-					int migratetype, bool *can_steal)
+int find_suitable_fallback(struct free_area *area, unsigned int order,
+			int migratetype, bool only_stealable, bool *can_steal)
 {
 	int i;
 	int fallback_mt;
@@ -1198,7 +1198,11 @@ static int find_suitable_fallback(struct free_area *area, unsigned int order,
 		if (can_steal_fallback(order, migratetype))
 			*can_steal = true;
 
-		return i;
+		if (!only_stealable)
+			return i;
+
+		if (*can_steal)
+			return i;
 	}
 
 	return -1;
@@ -1220,7 +1224,7 @@ __rmqueue_fallback(struct zone *zone, unsigned int order, int start_migratetype)
 				--current_order) {
 		area = &(zone->free_area[current_order]);
 		fallback_mt = find_suitable_fallback(area, current_order,
-				start_migratetype, &can_steal);
+				start_migratetype, false, &can_steal);
 		if (fallback_mt == -1)
 			continue;
 
-- 
1.9.1



* Re: [RFC PATCH v3 1/3] mm/cma: change fallback behaviour for CMA freepage
  2015-02-02  7:15 [RFC PATCH v3 1/3] mm/cma: change fallback behaviour for CMA freepage Joonsoo Kim
  2015-02-02  7:15 ` [RFC PATCH v3 2/3] mm/page_alloc: factor out fallback freepage checking Joonsoo Kim
  2015-02-02  7:15 ` [RFC PATCH v3 3/3] mm/compaction: enhance compaction finish condition Joonsoo Kim
@ 2015-02-02  7:29 ` Joonsoo Kim
  2015-02-02  8:27 ` Vlastimil Babka
  3 siblings, 0 replies; 15+ messages in thread
From: Joonsoo Kim @ 2015-02-02  7:29 UTC (permalink / raw)
  To: Andrew Morton, Vlastimil Babka
  Cc: Mel Gorman, David Rientjes, Rik van Riel, linux-mm, linux-kernel,
	Zhang Yanfei

On Mon, Feb 02, 2015 at 04:15:46PM +0900, Joonsoo Kim wrote:
> freepage with MIGRATE_CMA can be used only for MIGRATE_MOVABLE and
> they should not be expanded to other migratetype buddy list
> to protect them from unmovable/reclaimable allocation. Implementing
> these requirements in __rmqueue_fallback(), that is, finding largest
> possible block of freepage has bad effect that high order freepage
> with MIGRATE_CMA are broken continually although there are suitable
> order CMA freepage. Reason is that they are not be expanded to other
> migratetype buddy list and next __rmqueue_fallback() invocation try to
> finds another largest block of freepage and break it again. So,
> MIGRATE_CMA fallback should be handled separately. This patch
> introduces __rmqueue_cma_fallback(), that just wrapper of
> __rmqueue_smallest() and call it before __rmqueue_fallback()
> if migratetype == MIGRATE_MOVABLE.
> 
> This results in unintended behaviour change that MIGRATE_CMA freepage
> is always used first rather than other migratetype as movable
> allocation's fallback. But, as already mentioned above,
> MIGRATE_CMA can be used only for MIGRATE_MOVABLE, so it is better
> to use MIGRATE_CMA freepage first as much as possible. Otherwise,
> we needlessly take up precious freepages with other migratetype and
> increase chance of fragmentation.
> 
> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> ---

Hello, Vlastimil.

This RFC is targeted at you, but I mistakenly omitted your e-mail
from the CC list. Sorry about that. :/

What do you think of this v3, which tries to clean up
__rmqueue_fallback() much more?

Thanks.


* Re: [RFC PATCH v3 1/3] mm/cma: change fallback behaviour for CMA freepage
  2015-02-02  7:15 [RFC PATCH v3 1/3] mm/cma: change fallback behaviour for CMA freepage Joonsoo Kim
                   ` (2 preceding siblings ...)
  2015-02-02  7:29 ` [RFC PATCH v3 1/3] mm/cma: change fallback behaviour for CMA freepage Joonsoo Kim
@ 2015-02-02  8:27 ` Vlastimil Babka
  3 siblings, 0 replies; 15+ messages in thread
From: Vlastimil Babka @ 2015-02-02  8:27 UTC (permalink / raw)
  To: Joonsoo Kim, Andrew Morton
  Cc: Mel Gorman, David Rientjes, Rik van Riel, linux-mm, linux-kernel,
	Zhang Yanfei, Joonsoo Kim

On 02/02/2015 08:15 AM, Joonsoo Kim wrote:
> freepage with MIGRATE_CMA can be used only for MIGRATE_MOVABLE and
> they should not be expanded to other migratetype buddy list
> to protect them from unmovable/reclaimable allocation. Implementing
> these requirements in __rmqueue_fallback(), that is, finding largest
> possible block of freepage has bad effect that high order freepage
> with MIGRATE_CMA are broken continually although there are suitable
> order CMA freepage. Reason is that they are not be expanded to other
> migratetype buddy list and next __rmqueue_fallback() invocation try to
> finds another largest block of freepage and break it again. So,
> MIGRATE_CMA fallback should be handled separately. This patch
> introduces __rmqueue_cma_fallback(), that just wrapper of
> __rmqueue_smallest() and call it before __rmqueue_fallback()
> if migratetype == MIGRATE_MOVABLE.
> 
> This results in unintended behaviour change that MIGRATE_CMA freepage
> is always used first rather than other migratetype as movable
> allocation's fallback. But, as already mentioned above,
> MIGRATE_CMA can be used only for MIGRATE_MOVABLE, so it is better
> to use MIGRATE_CMA freepage first as much as possible. Otherwise,
> we needlessly take up precious freepages with other migratetype and
> increase chance of fragmentation.

This makes a lot of sense to me. We could go as far as having __rmqueue_smallest
consider both MOVABLE and CMA simultaneously and pick the smallest block
available between the two. But that would make the fast path more complex, so
this could be enough. Hope it survives the scrutiny of CMA success testing :)
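
Just to illustrate, a rough sketch of that alternative (CONFIG_CMA assumed;
the helper name is made up, it prefers CMA at equal order, and the caller
would still have to do the usual list_del/expand bookkeeping):

static struct page *__rmqueue_smallest_movable_or_cma(struct zone *zone,
						unsigned int order)
{
	unsigned int current_order;
	struct free_area *area;
	struct list_head *list;

	/* Walk orders from smallest to largest, like __rmqueue_smallest() */
	for (current_order = order; current_order < MAX_ORDER; ++current_order) {
		area = &zone->free_area[current_order];

		/* Prefer CMA at equal order so MOVABLE blocks stay intact */
		if (!list_empty(&area->free_list[MIGRATE_CMA]))
			list = &area->free_list[MIGRATE_CMA];
		else if (!list_empty(&area->free_list[MIGRATE_MOVABLE]))
			list = &area->free_list[MIGRATE_MOVABLE];
		else
			continue;

		return list_entry(list->next, struct page, lru);
	}

	return NULL;
}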

> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

Acked-by: Vlastimil Babka <vbabka@suse.cz>

> ---
>  mm/page_alloc.c | 36 +++++++++++++++++++-----------------
>  1 file changed, 19 insertions(+), 17 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 8d52ab1..e64b260 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1029,11 +1029,9 @@ struct page *__rmqueue_smallest(struct zone *zone, unsigned int order,
>  static int fallbacks[MIGRATE_TYPES][4] = {
>  	[MIGRATE_UNMOVABLE]   = { MIGRATE_RECLAIMABLE, MIGRATE_MOVABLE,     MIGRATE_RESERVE },
>  	[MIGRATE_RECLAIMABLE] = { MIGRATE_UNMOVABLE,   MIGRATE_MOVABLE,     MIGRATE_RESERVE },
> +	[MIGRATE_MOVABLE]     = { MIGRATE_RECLAIMABLE, MIGRATE_UNMOVABLE,   MIGRATE_RESERVE },
>  #ifdef CONFIG_CMA
> -	[MIGRATE_MOVABLE]     = { MIGRATE_CMA,         MIGRATE_RECLAIMABLE, MIGRATE_UNMOVABLE, MIGRATE_RESERVE },
>  	[MIGRATE_CMA]         = { MIGRATE_RESERVE }, /* Never used */
> -#else
> -	[MIGRATE_MOVABLE]     = { MIGRATE_RECLAIMABLE, MIGRATE_UNMOVABLE,   MIGRATE_RESERVE },
>  #endif
>  	[MIGRATE_RESERVE]     = { MIGRATE_RESERVE }, /* Never used */
>  #ifdef CONFIG_MEMORY_ISOLATION
> @@ -1041,6 +1039,17 @@ static int fallbacks[MIGRATE_TYPES][4] = {
>  #endif
>  };
>  
> +#ifdef CONFIG_CMA
> +static struct page *__rmqueue_cma_fallback(struct zone *zone,
> +					unsigned int order)
> +{
> +	return __rmqueue_smallest(zone, order, MIGRATE_CMA);
> +}
> +#else
> +static inline struct page *__rmqueue_cma_fallback(struct zone *zone,
> +					unsigned int order) { return NULL; }
> +#endif
> +
>  /*
>   * Move the free pages in a range to the free lists of the requested type.
>   * Note that start_page and end_pages are not aligned on a pageblock
> @@ -1192,19 +1201,8 @@ __rmqueue_fallback(struct zone *zone, unsigned int order, int start_migratetype)
>  					struct page, lru);
>  			area->nr_free--;
>  
> -			if (!is_migrate_cma(migratetype)) {
> -				try_to_steal_freepages(zone, page,
> -							start_migratetype,
> -							migratetype);
> -			} else {
> -				/*
> -				 * When borrowing from MIGRATE_CMA, we need to
> -				 * release the excess buddy pages to CMA
> -				 * itself, and we do not try to steal extra
> -				 * free pages.
> -				 */
> -				buddy_type = migratetype;
> -			}
> +			try_to_steal_freepages(zone, page, start_migratetype,
> +								migratetype);
>  
>  			/* Remove the page from the freelists */
>  			list_del(&page->lru);
> @@ -1246,7 +1244,11 @@ retry_reserve:
>  	page = __rmqueue_smallest(zone, order, migratetype);
>  
>  	if (unlikely(!page) && migratetype != MIGRATE_RESERVE) {
> -		page = __rmqueue_fallback(zone, order, migratetype);
> +		if (migratetype == MIGRATE_MOVABLE)
> +			page = __rmqueue_cma_fallback(zone, order);
> +
> +		if (!page)
> +			page = __rmqueue_fallback(zone, order, migratetype);
>  
>  		/*
>  		 * Use MIGRATE_RESERVE rather than fail an allocation. goto
> 



* Re: [RFC PATCH v3 2/3] mm/page_alloc: factor out fallback freepage checking
  2015-02-02  7:15 ` [RFC PATCH v3 2/3] mm/page_alloc: factor out fallback freepage checking Joonsoo Kim
@ 2015-02-02  9:59   ` Vlastimil Babka
  2015-02-02 13:26     ` Joonsoo Kim
  2015-02-02 12:56   ` Zhang Yanfei
  1 sibling, 1 reply; 15+ messages in thread
From: Vlastimil Babka @ 2015-02-02  9:59 UTC (permalink / raw)
  To: Joonsoo Kim, Andrew Morton
  Cc: Mel Gorman, David Rientjes, Rik van Riel, linux-mm, linux-kernel,
	Zhang Yanfei, Joonsoo Kim

On 02/02/2015 08:15 AM, Joonsoo Kim wrote:
> This is preparation step to use page allocator's anti fragmentation logic
> in compaction. This patch just separates fallback freepage checking part
> from fallback freepage management part. Therefore, there is no functional
> change.
> 
> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> ---
>  mm/page_alloc.c | 128 +++++++++++++++++++++++++++++++++-----------------------
>  1 file changed, 76 insertions(+), 52 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index e64b260..6cb18f8 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1142,14 +1142,26 @@ static void change_pageblock_range(struct page *pageblock_page,
>   * as fragmentation caused by those allocations polluting movable pageblocks
>   * is worse than movable allocations stealing from unmovable and reclaimable
>   * pageblocks.
> - *
> - * If we claim more than half of the pageblock, change pageblock's migratetype
> - * as well.
>   */
> -static void try_to_steal_freepages(struct zone *zone, struct page *page,
> -				  int start_type, int fallback_type)
> +static bool can_steal_fallback(unsigned int order, int start_mt)
> +{
> +	if (order >= pageblock_order)
> +		return true;
> +
> +	if (order >= pageblock_order / 2 ||
> +		start_mt == MIGRATE_RECLAIMABLE ||
> +		start_mt == MIGRATE_UNMOVABLE ||
> +		page_group_by_mobility_disabled)
> +		return true;
> +
> +	return false;
> +}
> +
> +static void steal_suitable_fallback(struct zone *zone, struct page *page,
> +							  int start_type)

Some comment about the function please?
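
Something along these lines would do, I think (the wording is only a sketch):

/*
 * This function implements the actual steal behaviour. If the order is
 * large enough (>= pageblock_order), we can take ownership of the whole
 * pageblock(s). Otherwise we move the free pages in the pageblock to the
 * starting migratetype's free list, and also claim the whole block if
 * more than half of it turns out to be free.
 */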

>  {
>  	int current_order = page_order(page);
> +	int pages;
>  
>  	/* Take ownership for orders >= pageblock_order */
>  	if (current_order >= pageblock_order) {
> @@ -1157,19 +1169,39 @@ static void try_to_steal_freepages(struct zone *zone, struct page *page,
>  		return;
>  	}
>  
> -	if (current_order >= pageblock_order / 2 ||
> -	    start_type == MIGRATE_RECLAIMABLE ||
> -	    start_type == MIGRATE_UNMOVABLE ||
> -	    page_group_by_mobility_disabled) {
> -		int pages;
> +	pages = move_freepages_block(zone, page, start_type);
>  
> -		pages = move_freepages_block(zone, page, start_type);
> +	/* Claim the whole block if over half of it is free */
> +	if (pages >= (1 << (pageblock_order-1)) ||
> +			page_group_by_mobility_disabled)
> +		set_pageblock_migratetype(page, start_type);
> +}
>  
> -		/* Claim the whole block if over half of it is free */
> -		if (pages >= (1 << (pageblock_order-1)) ||
> -				page_group_by_mobility_disabled)
> -			set_pageblock_migratetype(page, start_type);
> +static int find_suitable_fallback(struct free_area *area, unsigned int order,
> +					int migratetype, bool *can_steal)

Same here.

> +{
> +	int i;
> +	int fallback_mt;
> +
> +	if (area->nr_free == 0)
> +		return -1;
> +
> +	*can_steal = false;
> +	for (i = 0;; i++) {
> +		fallback_mt = fallbacks[migratetype][i];
> +		if (fallback_mt == MIGRATE_RESERVE)
> +			break;
> +
> +		if (list_empty(&area->free_list[fallback_mt]))
> +			continue;
> +
> +		if (can_steal_fallback(order, migratetype))
> +			*can_steal = true;
> +
> +		return i;

You want to return fallback_mt, not 'i', no?

>  	}
> +
> +	return -1;
>  }
>  
>  /* Remove an element from the buddy allocator from the fallback list */
> @@ -1179,53 +1211,45 @@ __rmqueue_fallback(struct zone *zone, unsigned int order, int start_migratetype)
>  	struct free_area *area;
>  	unsigned int current_order;
>  	struct page *page;
> +	int fallback_mt;
> +	bool can_steal;
>  
>  	/* Find the largest possible block of pages in the other list */
>  	for (current_order = MAX_ORDER-1;
>  				current_order >= order && current_order <= MAX_ORDER-1;
>  				--current_order) {
> -		int i;
> -		for (i = 0;; i++) {
> -			int migratetype = fallbacks[start_migratetype][i];
> -			int buddy_type = start_migratetype;
> -
> -			/* MIGRATE_RESERVE handled later if necessary */
> -			if (migratetype == MIGRATE_RESERVE)
> -				break;
> -
> -			area = &(zone->free_area[current_order]);
> -			if (list_empty(&area->free_list[migratetype]))
> -				continue;
> -
> -			page = list_entry(area->free_list[migratetype].next,
> -					struct page, lru);
> -			area->nr_free--;
> +		area = &(zone->free_area[current_order]);
> +		fallback_mt = find_suitable_fallback(area, current_order,
> +				start_migratetype, &can_steal);
> +		if (fallback_mt == -1)
> +			continue;
>  
> -			try_to_steal_freepages(zone, page, start_migratetype,
> -								migratetype);
> +		page = list_entry(area->free_list[fallback_mt].next,
> +						struct page, lru);
> +		if (can_steal)
> +			steal_suitable_fallback(zone, page, start_migratetype);
>  
> -			/* Remove the page from the freelists */
> -			list_del(&page->lru);
> -			rmv_page_order(page);
> -
> -			expand(zone, page, order, current_order, area,
> -					buddy_type);
> +		/* Remove the page from the freelists */
> +		area->nr_free--;
> +		list_del(&page->lru);
> +		rmv_page_order(page);
>  
> -			/*
> -			 * The freepage_migratetype may differ from pageblock's
> -			 * migratetype depending on the decisions in
> -			 * try_to_steal_freepages(). This is OK as long as it
> -			 * does not differ for MIGRATE_CMA pageblocks. For CMA
> -			 * we need to make sure unallocated pages flushed from
> -			 * pcp lists are returned to the correct freelist.
> -			 */
> -			set_freepage_migratetype(page, buddy_type);
> +		expand(zone, page, order, current_order, area,
> +					start_migratetype);
> +		/*
> +		 * The freepage_migratetype may differ from pageblock's
> +		 * migratetype depending on the decisions in
> +		 * try_to_steal_freepages(). This is OK as long as it
> +		 * does not differ for MIGRATE_CMA pageblocks. For CMA
> +		 * we need to make sure unallocated pages flushed from
> +		 * pcp lists are returned to the correct freelist.
> +		 */
> +		set_freepage_migratetype(page, start_migratetype);
>  
> -			trace_mm_page_alloc_extfrag(page, order, current_order,
> -				start_migratetype, migratetype);
> +		trace_mm_page_alloc_extfrag(page, order, current_order,
> +			start_migratetype, fallback_mt);
>  
> -			return page;
> -		}
> +		return page;
>  	}
>  
>  	return NULL;
> 



* Re: [RFC PATCH v3 3/3] mm/compaction: enhance compaction finish condition
  2015-02-02  7:15 ` [RFC PATCH v3 3/3] mm/compaction: enhance compaction finish condition Joonsoo Kim
@ 2015-02-02 10:20   ` Vlastimil Babka
  2015-02-02 13:23     ` Joonsoo Kim
  2015-02-02 13:51     ` Zhang Yanfei
  0 siblings, 2 replies; 15+ messages in thread
From: Vlastimil Babka @ 2015-02-02 10:20 UTC (permalink / raw)
  To: Joonsoo Kim, Andrew Morton
  Cc: Mel Gorman, David Rientjes, Rik van Riel, linux-mm, linux-kernel,
	Zhang Yanfei, Joonsoo Kim

On 02/02/2015 08:15 AM, Joonsoo Kim wrote:
> Compaction has anti fragmentation algorithm. It is that freepage
> should be more than pageblock order to finish the compaction if we don't
> find any freepage in requested migratetype buddy list. This is for
> mitigating fragmentation, but, there is a lack of migratetype
> consideration and it is too excessive compared to page allocator's anti
> fragmentation algorithm.
> 
> Not considering migratetype would cause premature finish of compaction.
> For example, if allocation request is for unmovable migratetype,
> freepage with CMA migratetype doesn't help that allocation and
> compaction should not be stopped. But, current logic regards this
> situation as compaction is no longer needed, so finish the compaction.

This is only for order >= pageblock_order, right? Perhaps that should be stated explicitly.

> Secondly, condition is too excessive compared to page allocator's logic.
> We can steal freepage from other migratetype and change pageblock
> migratetype on more relaxed conditions in page allocator. This is designed
> to prevent fragmentation and we can use it here. Imposing hard constraint
> only to the compaction doesn't help much in this case since page allocator
> would cause fragmentation again.
> 
> To solve these problems, this patch borrows anti fragmentation logic from
> page allocator. It will reduce premature compaction finish in some cases
> and reduce excessive compaction work.
> 
> stress-highalloc test in mmtests with non movable order 7 allocation shows
> considerable increase of compaction success rate.
> 
> Compaction success rate (Compaction success * 100 / Compaction stalls, %)
> 31.82 : 42.20
> 
> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> ---
>  mm/compaction.c | 14 ++++++++++++--
>  mm/internal.h   |  2 ++
>  mm/page_alloc.c | 12 ++++++++----
>  3 files changed, 22 insertions(+), 6 deletions(-)
> 
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 782772d..d40c426 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -1170,13 +1170,23 @@ static int __compact_finished(struct zone *zone, struct compact_control *cc,
>  	/* Direct compactor: Is a suitable page free? */
>  	for (order = cc->order; order < MAX_ORDER; order++) {
>  		struct free_area *area = &zone->free_area[order];
> +		bool can_steal;
>  
>  		/* Job done if page is free of the right migratetype */
>  		if (!list_empty(&area->free_list[migratetype]))
>  			return COMPACT_PARTIAL;
>  
> -		/* Job done if allocation would set block type */
> -		if (order >= pageblock_order && area->nr_free)
> +		/* MIGRATE_MOVABLE can fallback on MIGRATE_CMA */
> +		if (migratetype == MIGRATE_MOVABLE &&
> +			!list_empty(&area->free_list[MIGRATE_CMA]))
> +			return COMPACT_PARTIAL;

The above AFAICS needs an #ifdef CONFIG_CMA guard, otherwise it won't compile without CMA.
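
Something like the following guard should do the trick (just a sketch;
MIGRATE_CMA is only defined under CONFIG_CMA):

#ifdef CONFIG_CMA
		/* MIGRATE_MOVABLE can fallback on MIGRATE_CMA */
		if (migratetype == MIGRATE_MOVABLE &&
			!list_empty(&area->free_list[MIGRATE_CMA]))
			return COMPACT_PARTIAL;
#endif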

> +
> +		/*
> +		 * Job done if allocation would steal freepages from
> +		 * other migratetype buddy lists.
> +		 */
> +		if (find_suitable_fallback(area, order, migratetype,
> +						true, &can_steal) != -1)
>  			return COMPACT_PARTIAL;
>  	}
>  
> diff --git a/mm/internal.h b/mm/internal.h
> index c4d6c9b..9640650 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -200,6 +200,8 @@ isolate_freepages_range(struct compact_control *cc,
>  unsigned long
>  isolate_migratepages_range(struct compact_control *cc,
>  			   unsigned long low_pfn, unsigned long end_pfn);
> +int find_suitable_fallback(struct free_area *area, unsigned int order,
> +			int migratetype, bool only_stealable, bool *can_steal);
>  
>  #endif
>  
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 6cb18f8..0a150f1 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1177,8 +1177,8 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
>  		set_pageblock_migratetype(page, start_type);
>  }
>  
> -static int find_suitable_fallback(struct free_area *area, unsigned int order,
> -					int migratetype, bool *can_steal)
> +int find_suitable_fallback(struct free_area *area, unsigned int order,
> +			int migratetype, bool only_stealable, bool *can_steal)
>  {
>  	int i;
>  	int fallback_mt;
> @@ -1198,7 +1198,11 @@ static int find_suitable_fallback(struct free_area *area, unsigned int order,
>  		if (can_steal_fallback(order, migratetype))
>  			*can_steal = true;
>  
> -		return i;
> +		if (!only_stealable)
> +			return i;
> +
> +		if (*can_steal)
> +			return i;

So I've realized that this probably won't always work as intended :/ because we
still differ from what the page allocator does.
Suppose we compact for an UNMOVABLE allocation. First we try the RECLAIMABLE
fallback. It turns out we could fall back but not steal, so we skip it due to
only_stealable == true. Then we try MOVABLE, it turns out we can steal, so we
finish compaction.
The allocation attempt follows, and it will fall back to RECLAIMABLE, without
any extra stealing. The compaction decision based on MOVABLE was moot.
Is it a big problem? Probably not, the compaction will still perform some extra
anti-fragmentation on average, but we should consider it.

I've got another idea for a small improvement. We should only test for
fallbacks once the migration scanner has scanned (and migrated) a whole
pageblock. That should be a simple alignment test of cc->migrate_pfn (see the
sketch below).
Advantages:
- potentially less checking overhead
- the chances of stealing increase if we have created more free pages through
  migration
- thus less fragmentation
The cost is a bit more time spent compacting, but it's bounded and worth it
(especially the reduced fragmentation) IMHO.
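
A rough sketch of what I mean, placed just before the fallback check in
__compact_finished() (placement and wording only illustrative):

		/*
		 * Don't bother checking fallbacks until the migration
		 * scanner has finished the pageblock it is in; migrating
		 * the whole block first improves the chance of stealing.
		 */
		if (!IS_ALIGNED(cc->migrate_pfn, pageblock_nr_pages))
			continue;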

>  	}
>  
>  	return -1;
> @@ -1220,7 +1224,7 @@ __rmqueue_fallback(struct zone *zone, unsigned int order, int start_migratetype)
>  				--current_order) {
>  		area = &(zone->free_area[current_order]);
>  		fallback_mt = find_suitable_fallback(area, current_order,
> -				start_migratetype, &can_steal);
> +				start_migratetype, false, &can_steal);
>  		if (fallback_mt == -1)
>  			continue;
>  
> 



* Re: [RFC PATCH v3 2/3] mm/page_alloc: factor out fallback freepage checking
  2015-02-02  7:15 ` [RFC PATCH v3 2/3] mm/page_alloc: factor out fallback freepage checking Joonsoo Kim
  2015-02-02  9:59   ` Vlastimil Babka
@ 2015-02-02 12:56   ` Zhang Yanfei
  2015-02-02 13:29     ` Joonsoo Kim
  1 sibling, 1 reply; 15+ messages in thread
From: Zhang Yanfei @ 2015-02-02 12:56 UTC (permalink / raw)
  To: Joonsoo Kim, Andrew Morton
  Cc: Mel Gorman, David Rientjes, Rik van Riel, linux-mm, linux-kernel,
	Zhang Yanfei, Joonsoo Kim

Hello Joonsoo,

At 2015/2/2 15:15, Joonsoo Kim wrote:
> This is preparation step to use page allocator's anti fragmentation logic
> in compaction. This patch just separates fallback freepage checking part
> from fallback freepage management part. Therefore, there is no functional
> change.
> 
> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> ---
>  mm/page_alloc.c | 128 +++++++++++++++++++++++++++++++++-----------------------
>  1 file changed, 76 insertions(+), 52 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index e64b260..6cb18f8 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1142,14 +1142,26 @@ static void change_pageblock_range(struct page *pageblock_page,
>   * as fragmentation caused by those allocations polluting movable pageblocks
>   * is worse than movable allocations stealing from unmovable and reclaimable
>   * pageblocks.
> - *
> - * If we claim more than half of the pageblock, change pageblock's migratetype
> - * as well.
>   */
> -static void try_to_steal_freepages(struct zone *zone, struct page *page,
> -				  int start_type, int fallback_type)
> +static bool can_steal_fallback(unsigned int order, int start_mt)
> +{
> +	if (order >= pageblock_order)
> +		return true;

Is this test necessary? An order >= pageblock_order will always pass
the order >= pageblock_order / 2 test below.

Thanks.

> +
> +	if (order >= pageblock_order / 2 ||
> +		start_mt == MIGRATE_RECLAIMABLE ||
> +		start_mt == MIGRATE_UNMOVABLE ||
> +		page_group_by_mobility_disabled)
> +		return true;
> +
> +	return false;
> +}
> +
> +static void steal_suitable_fallback(struct zone *zone, struct page *page,
> +							  int start_type)
>  {
>  	int current_order = page_order(page);
> +	int pages;
>  
>  	/* Take ownership for orders >= pageblock_order */
>  	if (current_order >= pageblock_order) {
> @@ -1157,19 +1169,39 @@ static void try_to_steal_freepages(struct zone *zone, struct page *page,
>  		return;
>  	}
>  
> -	if (current_order >= pageblock_order / 2 ||
> -	    start_type == MIGRATE_RECLAIMABLE ||
> -	    start_type == MIGRATE_UNMOVABLE ||
> -	    page_group_by_mobility_disabled) {
> -		int pages;
> +	pages = move_freepages_block(zone, page, start_type);
>  
> -		pages = move_freepages_block(zone, page, start_type);
> +	/* Claim the whole block if over half of it is free */
> +	if (pages >= (1 << (pageblock_order-1)) ||
> +			page_group_by_mobility_disabled)
> +		set_pageblock_migratetype(page, start_type);
> +}
>  
> -		/* Claim the whole block if over half of it is free */
> -		if (pages >= (1 << (pageblock_order-1)) ||
> -				page_group_by_mobility_disabled)
> -			set_pageblock_migratetype(page, start_type);
> +static int find_suitable_fallback(struct free_area *area, unsigned int order,
> +					int migratetype, bool *can_steal)
> +{
> +	int i;
> +	int fallback_mt;
> +
> +	if (area->nr_free == 0)
> +		return -1;
> +
> +	*can_steal = false;
> +	for (i = 0;; i++) {
> +		fallback_mt = fallbacks[migratetype][i];
> +		if (fallback_mt == MIGRATE_RESERVE)
> +			break;
> +
> +		if (list_empty(&area->free_list[fallback_mt]))
> +			continue;
> +
> +		if (can_steal_fallback(order, migratetype))
> +			*can_steal = true;
> +
> +		return i;
>  	}
> +
> +	return -1;
>  }
>  
>  /* Remove an element from the buddy allocator from the fallback list */
> @@ -1179,53 +1211,45 @@ __rmqueue_fallback(struct zone *zone, unsigned int order, int start_migratetype)
>  	struct free_area *area;
>  	unsigned int current_order;
>  	struct page *page;
> +	int fallback_mt;
> +	bool can_steal;
>  
>  	/* Find the largest possible block of pages in the other list */
>  	for (current_order = MAX_ORDER-1;
>  				current_order >= order && current_order <= MAX_ORDER-1;
>  				--current_order) {
> -		int i;
> -		for (i = 0;; i++) {
> -			int migratetype = fallbacks[start_migratetype][i];
> -			int buddy_type = start_migratetype;
> -
> -			/* MIGRATE_RESERVE handled later if necessary */
> -			if (migratetype == MIGRATE_RESERVE)
> -				break;
> -
> -			area = &(zone->free_area[current_order]);
> -			if (list_empty(&area->free_list[migratetype]))
> -				continue;
> -
> -			page = list_entry(area->free_list[migratetype].next,
> -					struct page, lru);
> -			area->nr_free--;
> +		area = &(zone->free_area[current_order]);
> +		fallback_mt = find_suitable_fallback(area, current_order,
> +				start_migratetype, &can_steal);
> +		if (fallback_mt == -1)
> +			continue;
>  
> -			try_to_steal_freepages(zone, page, start_migratetype,
> -								migratetype);
> +		page = list_entry(area->free_list[fallback_mt].next,
> +						struct page, lru);
> +		if (can_steal)
> +			steal_suitable_fallback(zone, page, start_migratetype);
>  
> -			/* Remove the page from the freelists */
> -			list_del(&page->lru);
> -			rmv_page_order(page);
> -
> -			expand(zone, page, order, current_order, area,
> -					buddy_type);
> +		/* Remove the page from the freelists */
> +		area->nr_free--;
> +		list_del(&page->lru);
> +		rmv_page_order(page);
>  
> -			/*
> -			 * The freepage_migratetype may differ from pageblock's
> -			 * migratetype depending on the decisions in
> -			 * try_to_steal_freepages(). This is OK as long as it
> -			 * does not differ for MIGRATE_CMA pageblocks. For CMA
> -			 * we need to make sure unallocated pages flushed from
> -			 * pcp lists are returned to the correct freelist.
> -			 */
> -			set_freepage_migratetype(page, buddy_type);
> +		expand(zone, page, order, current_order, area,
> +					start_migratetype);
> +		/*
> +		 * The freepage_migratetype may differ from pageblock's
> +		 * migratetype depending on the decisions in
> +		 * try_to_steal_freepages(). This is OK as long as it
> +		 * does not differ for MIGRATE_CMA pageblocks. For CMA
> +		 * we need to make sure unallocated pages flushed from
> +		 * pcp lists are returned to the correct freelist.
> +		 */
> +		set_freepage_migratetype(page, start_migratetype);
>  
> -			trace_mm_page_alloc_extfrag(page, order, current_order,
> -				start_migratetype, migratetype);
> +		trace_mm_page_alloc_extfrag(page, order, current_order,
> +			start_migratetype, fallback_mt);
>  
> -			return page;
> -		}
> +		return page;
>  	}
>  
>  	return NULL;
> 


* Re: [RFC PATCH v3 3/3] mm/compaction: enhance compaction finish condition
  2015-02-02 10:20   ` Vlastimil Babka
@ 2015-02-02 13:23     ` Joonsoo Kim
  2015-02-02 14:07       ` Vlastimil Babka
  2015-02-02 13:51     ` Zhang Yanfei
  1 sibling, 1 reply; 15+ messages in thread
From: Joonsoo Kim @ 2015-02-02 13:23 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Andrew Morton, Mel Gorman, David Rientjes, Rik van Riel,
	Linux Memory Management List, LKML, Zhang Yanfei, Joonsoo Kim

2015-02-02 19:20 GMT+09:00 Vlastimil Babka <vbabka@suse.cz>:
> On 02/02/2015 08:15 AM, Joonsoo Kim wrote:
>> Compaction has anti fragmentation algorithm. It is that freepage
>> should be more than pageblock order to finish the compaction if we don't
>> find any freepage in requested migratetype buddy list. This is for
>> mitigating fragmentation, but, there is a lack of migratetype
>> consideration and it is too excessive compared to page allocator's anti
>> fragmentation algorithm.
>>
>> Not considering migratetype would cause premature finish of compaction.
>> For example, if allocation request is for unmovable migratetype,
>> freepage with CMA migratetype doesn't help that allocation and
>> compaction should not be stopped. But, current logic regards this
>> situation as compaction is no longer needed, so finish the compaction.
>
> This is only for order >= pageblock_order, right? Perhaps should be told explicitly.

Okay.

>> Secondly, condition is too excessive compared to page allocator's logic.
>> We can steal freepage from other migratetype and change pageblock
>> migratetype on more relaxed conditions in page allocator. This is designed
>> to prevent fragmentation and we can use it here. Imposing hard constraint
>> only to the compaction doesn't help much in this case since page allocator
>> would cause fragmentation again.
>>
>> To solve these problems, this patch borrows anti fragmentation logic from
>> page allocator. It will reduce premature compaction finish in some cases
>> and reduce excessive compaction work.
>>
>> stress-highalloc test in mmtests with non movable order 7 allocation shows
>> considerable increase of compaction success rate.
>>
>> Compaction success rate (Compaction success * 100 / Compaction stalls, %)
>> 31.82 : 42.20
>>
>> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
>> ---
>>  mm/compaction.c | 14 ++++++++++++--
>>  mm/internal.h   |  2 ++
>>  mm/page_alloc.c | 12 ++++++++----
>>  3 files changed, 22 insertions(+), 6 deletions(-)
>>
>> diff --git a/mm/compaction.c b/mm/compaction.c
>> index 782772d..d40c426 100644
>> --- a/mm/compaction.c
>> +++ b/mm/compaction.c
>> @@ -1170,13 +1170,23 @@ static int __compact_finished(struct zone *zone, struct compact_control *cc,
>>       /* Direct compactor: Is a suitable page free? */
>>       for (order = cc->order; order < MAX_ORDER; order++) {
>>               struct free_area *area = &zone->free_area[order];
>> +             bool can_steal;
>>
>>               /* Job done if page is free of the right migratetype */
>>               if (!list_empty(&area->free_list[migratetype]))
>>                       return COMPACT_PARTIAL;
>>
>> -             /* Job done if allocation would set block type */
>> -             if (order >= pageblock_order && area->nr_free)
>> +             /* MIGRATE_MOVABLE can fallback on MIGRATE_CMA */
>> +             if (migratetype == MIGRATE_MOVABLE &&
>> +                     !list_empty(&area->free_list[MIGRATE_CMA]))
>> +                     return COMPACT_PARTIAL;
>
> The above AFAICS needs #ifdef CMA otherwise won't compile without CMA.
>
>> +
>> +             /*
>> +              * Job done if allocation would steal freepages from
>> +              * other migratetype buddy lists.
>> +              */
>> +             if (find_suitable_fallback(area, order, migratetype,
>> +                                             true, &can_steal) != -1)
>>                       return COMPACT_PARTIAL;
>>       }
>>
>> diff --git a/mm/internal.h b/mm/internal.h
>> index c4d6c9b..9640650 100644
>> --- a/mm/internal.h
>> +++ b/mm/internal.h
>> @@ -200,6 +200,8 @@ isolate_freepages_range(struct compact_control *cc,
>>  unsigned long
>>  isolate_migratepages_range(struct compact_control *cc,
>>                          unsigned long low_pfn, unsigned long end_pfn);
>> +int find_suitable_fallback(struct free_area *area, unsigned int order,
>> +                     int migratetype, bool only_stealable, bool *can_steal);
>>
>>  #endif
>>
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index 6cb18f8..0a150f1 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -1177,8 +1177,8 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
>>               set_pageblock_migratetype(page, start_type);
>>  }
>>
>> -static int find_suitable_fallback(struct free_area *area, unsigned int order,
>> -                                     int migratetype, bool *can_steal)
>> +int find_suitable_fallback(struct free_area *area, unsigned int order,
>> +                     int migratetype, bool only_stealable, bool *can_steal)
>>  {
>>       int i;
>>       int fallback_mt;
>> @@ -1198,7 +1198,11 @@ static int find_suitable_fallback(struct free_area *area, unsigned int order,
>>               if (can_steal_fallback(order, migratetype))
>>                       *can_steal = true;
>>
>> -             return i;
>> +             if (!only_stealable)
>> +                     return i;
>> +
>> +             if (*can_steal)
>> +                     return i;
>
> So I've realized that this problaby won't always work as intended :/ Because we
> still differ from what page allocator does.
> Consider we compact for UNMOVABLE allocation. First we try RECLAIMABLE fallback.
> Turns out we could fallback, but not steal, hence we skip it due to
> only_stealable == true. So we try MOVABLE, and turns out we can steal, so we
> finish compaction.
> Then the allocation attempt follows, and it will fallback to RECLAIMABLE,
> without extra stealing. The compaction decision for MOVABLE was moot.
> Is it a big problem? Probably not, the compaction will still perform some extra
> anti-fragmentation on average, but we should consider it.

Hello,

First of all, thanks for the quick review. :)

Hmm... I don't get it. Is this case possible in the current implementation?
can_steal_fallback() decides whether stealing is possible based on the
freepage order and start_migratetype. If the fallback freepages on the
RECLAIMABLE and MOVABLE lists are of the same order, can_steal will be true
for both or false for neither. If the orders differ, the compaction decision
will be respected by __rmqueue_fallback(), since it tries to find freepages
from high order to low order.

> I've got another idea for small improvement. We should only test for fallbacks
> when migration scanner has scanned (and migrated) a whole pageblock. Should be a
> simple alignment test of cc->migrate_pfn.
> Advantages:
> - potentially less checking overhead
> - chances of stealing increase if we created more free pages for migration
> - thus less fragmentation
> The cost is a bit more time spent compacting, but it's bounded and worth it
> (especially the less fragmentation) IMHO.

It looks like a good idea, but I'm not sure it is worth doing.

One intention of this patch is to reduce needless compaction effort. If the
steal decision logic is well designed to avoid fragmentation, finishing
compaction at that point would not cause any fragmentation problem, so we
don't need to compact further until a pageblock-aligned pfn. IMHO, it'd be
better to focus on improving the steal decision to avoid fragmentation.
Anyway, I will run the stress-highalloc benchmark without reboot and
investigate the long-term fragmentation effect of this patch and your idea.

Thanks.


* Re: [RFC PATCH v3 2/3] mm/page_alloc: factor out fallback freepage checking
  2015-02-02  9:59   ` Vlastimil Babka
@ 2015-02-02 13:26     ` Joonsoo Kim
  0 siblings, 0 replies; 15+ messages in thread
From: Joonsoo Kim @ 2015-02-02 13:26 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Andrew Morton, Mel Gorman, David Rientjes, Rik van Riel,
	Linux Memory Management List, LKML, Zhang Yanfei, Joonsoo Kim

2015-02-02 18:59 GMT+09:00 Vlastimil Babka <vbabka@suse.cz>:
> On 02/02/2015 08:15 AM, Joonsoo Kim wrote:
>> This is preparation step to use page allocator's anti fragmentation logic
>> in compaction. This patch just separates fallback freepage checking part
>> from fallback freepage management part. Therefore, there is no functional
>> change.
>>
>> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
>> ---
>>  mm/page_alloc.c | 128 +++++++++++++++++++++++++++++++++-----------------------
>>  1 file changed, 76 insertions(+), 52 deletions(-)
>>
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index e64b260..6cb18f8 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -1142,14 +1142,26 @@ static void change_pageblock_range(struct page *pageblock_page,
>>   * as fragmentation caused by those allocations polluting movable pageblocks
>>   * is worse than movable allocations stealing from unmovable and reclaimable
>>   * pageblocks.
>> - *
>> - * If we claim more than half of the pageblock, change pageblock's migratetype
>> - * as well.
>>   */
>> -static void try_to_steal_freepages(struct zone *zone, struct page *page,
>> -                               int start_type, int fallback_type)
>> +static bool can_steal_fallback(unsigned int order, int start_mt)
>> +{
>> +     if (order >= pageblock_order)
>> +             return true;
>> +
>> +     if (order >= pageblock_order / 2 ||
>> +             start_mt == MIGRATE_RECLAIMABLE ||
>> +             start_mt == MIGRATE_UNMOVABLE ||
>> +             page_group_by_mobility_disabled)
>> +             return true;
>> +
>> +     return false;
>> +}
>> +
>> +static void steal_suitable_fallback(struct zone *zone, struct page *page,
>> +                                                       int start_type)
>
> Some comment about the function please?

Okay.

>>  {
>>       int current_order = page_order(page);
>> +     int pages;
>>
>>       /* Take ownership for orders >= pageblock_order */
>>       if (current_order >= pageblock_order) {
>> @@ -1157,19 +1169,39 @@ static void try_to_steal_freepages(struct zone *zone, struct page *page,
>>               return;
>>       }
>>
>> -     if (current_order >= pageblock_order / 2 ||
>> -         start_type == MIGRATE_RECLAIMABLE ||
>> -         start_type == MIGRATE_UNMOVABLE ||
>> -         page_group_by_mobility_disabled) {
>> -             int pages;
>> +     pages = move_freepages_block(zone, page, start_type);
>>
>> -             pages = move_freepages_block(zone, page, start_type);
>> +     /* Claim the whole block if over half of it is free */
>> +     if (pages >= (1 << (pageblock_order-1)) ||
>> +                     page_group_by_mobility_disabled)
>> +             set_pageblock_migratetype(page, start_type);
>> +}
>>
>> -             /* Claim the whole block if over half of it is free */
>> -             if (pages >= (1 << (pageblock_order-1)) ||
>> -                             page_group_by_mobility_disabled)
>> -                     set_pageblock_migratetype(page, start_type);
>> +static int find_suitable_fallback(struct free_area *area, unsigned int order,
>> +                                     int migratetype, bool *can_steal)
>
> Same here.

Okay.

>> +{
>> +     int i;
>> +     int fallback_mt;
>> +
>> +     if (area->nr_free == 0)
>> +             return -1;
>> +
>> +     *can_steal = false;
>> +     for (i = 0;; i++) {
>> +             fallback_mt = fallbacks[migratetype][i];
>> +             if (fallback_mt == MIGRATE_RESERVE)
>> +                     break;
>> +
>> +             if (list_empty(&area->free_list[fallback_mt]))
>> +                     continue;
>> +
>> +             if (can_steal_fallback(order, migratetype))
>> +                     *can_steal = true;
>> +
>> +             return i;
>
> You want to return fallback_mt, not 'i', no?

Yes. I will fix it.

Thanks.


* Re: [RFC PATCH v3 2/3] mm/page_alloc: factor out fallback freepage checking
  2015-02-02 12:56   ` Zhang Yanfei
@ 2015-02-02 13:29     ` Joonsoo Kim
  0 siblings, 0 replies; 15+ messages in thread
From: Joonsoo Kim @ 2015-02-02 13:29 UTC (permalink / raw)
  To: Zhang Yanfei
  Cc: Andrew Morton, Mel Gorman, David Rientjes, Rik van Riel,
	Linux Memory Management List, LKML, Zhang Yanfei, Joonsoo Kim

2015-02-02 21:56 GMT+09:00 Zhang Yanfei <zhangyanfei.ok@hotmail.com>:
> Hello Joonsoo,
>
> At 2015/2/2 15:15, Joonsoo Kim wrote:
>> This is preparation step to use page allocator's anti fragmentation logic
>> in compaction. This patch just separates fallback freepage checking part
>> from fallback freepage management part. Therefore, there is no functional
>> change.
>>
>> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
>> ---
>>  mm/page_alloc.c | 128 +++++++++++++++++++++++++++++++++-----------------------
>>  1 file changed, 76 insertions(+), 52 deletions(-)
>>
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index e64b260..6cb18f8 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -1142,14 +1142,26 @@ static void change_pageblock_range(struct page *pageblock_page,
>>   * as fragmentation caused by those allocations polluting movable pageblocks
>>   * is worse than movable allocations stealing from unmovable and reclaimable
>>   * pageblocks.
>> - *
>> - * If we claim more than half of the pageblock, change pageblock's migratetype
>> - * as well.
>>   */
>> -static void try_to_steal_freepages(struct zone *zone, struct page *page,
>> -                               int start_type, int fallback_type)
>> +static bool can_steal_fallback(unsigned int order, int start_mt)
>> +{
>> +     if (order >= pageblock_order)
>> +             return true;
>
> Is this test necessary? Since an order which is >= pageblock_order
> will always pass the order >= pageblock_order / 2 test below.
>

Yes, that's true. But I'd like to keep the code as is, because the
condition "order >= pageblock_order / 2" is really a heuristic and could
be changed someday. Instead of removing the check, I will add a comment on it.
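
For example, something like this (the function body is the one from the
patch above; only the comment is new, and its wording is a rough sketch):

static bool can_steal_fallback(unsigned int order, int start_mt)
{
        /*
         * Keep this check even though it is currently implied by the
         * "order >= pageblock_order / 2" heuristic below: that heuristic
         * could change someday, while stealing for a request of at least
         * pageblock_order should always be allowed.
         */
        if (order >= pageblock_order)
                return true;

        if (order >= pageblock_order / 2 ||
                start_mt == MIGRATE_RECLAIMABLE ||
                start_mt == MIGRATE_UNMOVABLE ||
                page_group_by_mobility_disabled)
                return true;

        return false;
}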

Thanks.


* Re: [RFC PATCH v3 3/3] mm/compaction: enhance compaction finish condition
  2015-02-02 10:20   ` Vlastimil Babka
  2015-02-02 13:23     ` Joonsoo Kim
@ 2015-02-02 13:51     ` Zhang Yanfei
  2015-02-02 14:19       ` Vlastimil Babka
  2015-02-03  6:55       ` Joonsoo Kim
  1 sibling, 2 replies; 15+ messages in thread
From: Zhang Yanfei @ 2015-02-02 13:51 UTC (permalink / raw)
  To: Vlastimil Babka, Joonsoo Kim, Andrew Morton
  Cc: Mel Gorman, David Rientjes, Rik van Riel, linux-mm, linux-kernel,
	Zhang Yanfei, Joonsoo Kim

Hello,

At 2015/2/2 18:20, Vlastimil Babka wrote:
> On 02/02/2015 08:15 AM, Joonsoo Kim wrote:
>> Compaction has an anti-fragmentation algorithm: a freepage of at least
>> pageblock order must be available to finish the compaction if we don't
>> find any freepage in the requested migratetype buddy list. This is meant
>> to mitigate fragmentation, but it lacks any migratetype consideration
>> and it is too strict compared to the page allocator's anti-fragmentation
>> algorithm.
>>
>> Not considering the migratetype can cause a premature finish of compaction.
>> For example, if the allocation request is for the unmovable migratetype,
>> a freepage with the CMA migratetype doesn't help that allocation, so
>> compaction should not be stopped. But the current logic regards this
>> situation as "compaction is no longer needed" and finishes the compaction.
> 
> This is only for order >= pageblock_order, right? Perhaps this should be stated explicitly.

I might be wrong, but if patch 1 is applied, then after the system runs for
some time there should be no MIGRATE_CMA free pages left in the system, right?
If so, the example above no longer applies.

> 
>> Secondly, the condition is too strict compared to the page allocator's logic.
>> The page allocator can steal freepages from another migratetype and change a
>> pageblock's migratetype under more relaxed conditions. This is designed
>> to prevent fragmentation and we can use it here. Imposing a hard constraint
>> only on compaction doesn't help much in this case, since the page allocator
>> would cause fragmentation again.
>>
>> To solve these problems, this patch borrows the anti-fragmentation logic from
>> the page allocator. It reduces premature compaction finishes in some cases
>> and reduces excessive compaction work.
>>
>> The stress-highalloc test in mmtests with non-movable order-7 allocations
>> shows a considerable increase in the compaction success rate.
>>
>> Compaction success rate (Compaction success * 100 / Compaction stalls, %)
>> 31.82 : 42.20
>>
>> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
>> ---
>>  mm/compaction.c | 14 ++++++++++++--
>>  mm/internal.h   |  2 ++
>>  mm/page_alloc.c | 12 ++++++++----
>>  3 files changed, 22 insertions(+), 6 deletions(-)
>>
>> diff --git a/mm/compaction.c b/mm/compaction.c
>> index 782772d..d40c426 100644
>> --- a/mm/compaction.c
>> +++ b/mm/compaction.c
>> @@ -1170,13 +1170,23 @@ static int __compact_finished(struct zone *zone, struct compact_control *cc,
>>  	/* Direct compactor: Is a suitable page free? */
>>  	for (order = cc->order; order < MAX_ORDER; order++) {
>>  		struct free_area *area = &zone->free_area[order];
>> +		bool can_steal;
>>  
>>  		/* Job done if page is free of the right migratetype */
>>  		if (!list_empty(&area->free_list[migratetype]))
>>  			return COMPACT_PARTIAL;
>>  
>> -		/* Job done if allocation would set block type */
>> -		if (order >= pageblock_order && area->nr_free)
>> +		/* MIGRATE_MOVABLE can fallback on MIGRATE_CMA */
>> +		if (migratetype == MIGRATE_MOVABLE &&
>> +			!list_empty(&area->free_list[MIGRATE_CMA]))
>> +			return COMPACT_PARTIAL;
> 
> The above AFAICS needs #ifdef CMA otherwise won't compile without CMA.
> 
>> +
>> +		/*
>> +		 * Job done if allocation would steal freepages from
>> +		 * other migratetype buddy lists.
>> +		 */
>> +		if (find_suitable_fallback(area, order, migratetype,
>> +						true, &can_steal) != -1)
>>  			return COMPACT_PARTIAL;
>>  	}
>>  
>> diff --git a/mm/internal.h b/mm/internal.h
>> index c4d6c9b..9640650 100644
>> --- a/mm/internal.h
>> +++ b/mm/internal.h
>> @@ -200,6 +200,8 @@ isolate_freepages_range(struct compact_control *cc,
>>  unsigned long
>>  isolate_migratepages_range(struct compact_control *cc,
>>  			   unsigned long low_pfn, unsigned long end_pfn);
>> +int find_suitable_fallback(struct free_area *area, unsigned int order,
>> +			int migratetype, bool only_stealable, bool *can_steal);
>>  
>>  #endif
>>  
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index 6cb18f8..0a150f1 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -1177,8 +1177,8 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
>>  		set_pageblock_migratetype(page, start_type);
>>  }
>>  
>> -static int find_suitable_fallback(struct free_area *area, unsigned int order,
>> -					int migratetype, bool *can_steal)
>> +int find_suitable_fallback(struct free_area *area, unsigned int order,
>> +			int migratetype, bool only_stealable, bool *can_steal)
>>  {
>>  	int i;
>>  	int fallback_mt;
>> @@ -1198,7 +1198,11 @@ static int find_suitable_fallback(struct free_area *area, unsigned int order,
>>  		if (can_steal_fallback(order, migratetype))
>>  			*can_steal = true;
>>  
>> -		return i;
>> +		if (!only_stealable)
>> +			return i;
>> +
>> +		if (*can_steal)
>> +			return i;
> 
> So I've realized that this probably won't always work as intended :/ because we
> still differ from what the page allocator does.
> Consider we compact for an UNMOVABLE allocation. First we try the RECLAIMABLE
> fallback. It turns out we could fall back, but not steal, so we skip it due to
> only_stealable == true. Then we try MOVABLE, and it turns out we can steal, so
> we finish compaction.
> The allocation attempt then follows, and it will fall back to RECLAIMABLE,
> without any extra stealing. The compaction decision for MOVABLE was moot.
> Is it a big problem? Probably not, compaction will still perform some extra
> anti-fragmentation on average, but we should consider it.
> 
> I've got another idea for a small improvement. We should only test for
> fallbacks when the migration scanner has scanned (and migrated) a whole
> pageblock. That should be a simple alignment test of cc->migrate_pfn.
> Advantages:
> - potentially less checking overhead
> - better chances of stealing if we have created more free pages for migration
> - thus less fragmentation
> The cost is a bit more time spent compacting, but it's bounded and worth it
> (especially the reduced fragmentation) IMHO.

This seems to make the compaction a little complicated... I kind of
don't see why this approach gives more anti-fragmentation.

Thanks.

> 
>>  	}
>>  
>>  	return -1;
>> @@ -1220,7 +1224,7 @@ __rmqueue_fallback(struct zone *zone, unsigned int order, int start_migratetype)
>>  				--current_order) {
>>  		area = &(zone->free_area[current_order]);
>>  		fallback_mt = find_suitable_fallback(area, current_order,
>> -				start_migratetype, &can_steal);
>> +				start_migratetype, false, &can_steal);
>>  		if (fallback_mt == -1)
>>  			continue;
>>  
>>
> 


* Re: [RFC PATCH v3 3/3] mm/compaction: enhance compaction finish condition
  2015-02-02 13:23     ` Joonsoo Kim
@ 2015-02-02 14:07       ` Vlastimil Babka
  0 siblings, 0 replies; 15+ messages in thread
From: Vlastimil Babka @ 2015-02-02 14:07 UTC (permalink / raw)
  To: Joonsoo Kim
  Cc: Andrew Morton, Mel Gorman, David Rientjes, Rik van Riel,
	Linux Memory Management List, LKML, Zhang Yanfei, Joonsoo Kim

On 02/02/2015 02:23 PM, Joonsoo Kim wrote:
> 2015-02-02 19:20 GMT+09:00 Vlastimil Babka <vbabka@suse.cz>:
>> On 02/02/2015 08:15 AM, Joonsoo Kim wrote:
>>
>> So I've realized that this probably won't always work as intended :/ because we
>> still differ from what the page allocator does.
>> Consider we compact for an UNMOVABLE allocation. First we try the RECLAIMABLE
>> fallback. It turns out we could fall back, but not steal, so we skip it due to
>> only_stealable == true. Then we try MOVABLE, and it turns out we can steal, so
>> we finish compaction.
>> The allocation attempt then follows, and it will fall back to RECLAIMABLE,
>> without any extra stealing. The compaction decision for MOVABLE was moot.
>> Is it a big problem? Probably not, compaction will still perform some extra
>> anti-fragmentation on average, but we should consider it.
> 
> Hello,
> 
> First of all, thanks for the quick review. :)
> 
> Hmm... I don't get it. Is this case possible in the current implementation?
> can_steal_fallback() decides whether stealing is possible based only on the
> freepage order and start_migratetype. If fallback freepages exist on both the
> RECLAIMABLE and MOVABLE lists at the same order, can_steal will be true for
> both or false for both. If the orders differ, the compaction decision will be
> followed by __rmqueue_fallback(), since it tries to find a freepage from high
> order to low order.

Ah, right, I got confused into thinking that the result of can_steal depends on
how many freepages it found within the pageblock to steal. Sorry about the noise.


* Re: [RFC PATCH v3 3/3] mm/compaction: enhance compaction finish condition
  2015-02-02 13:51     ` Zhang Yanfei
@ 2015-02-02 14:19       ` Vlastimil Babka
  2015-02-03  6:55       ` Joonsoo Kim
  1 sibling, 0 replies; 15+ messages in thread
From: Vlastimil Babka @ 2015-02-02 14:19 UTC (permalink / raw)
  To: Zhang Yanfei, Joonsoo Kim, Andrew Morton
  Cc: Mel Gorman, David Rientjes, Rik van Riel, linux-mm, linux-kernel,
	Zhang Yanfei, Joonsoo Kim

On 02/02/2015 02:51 PM, Zhang Yanfei wrote:
>> I've got another idea for a small improvement. We should only test for
>> fallbacks when the migration scanner has scanned (and migrated) a whole
>> pageblock. That should be a simple alignment test of cc->migrate_pfn.
>> Advantages:
>> - potentially less checking overhead
>> - better chances of stealing if we have created more free pages for migration
>> - thus less fragmentation
>> The cost is a bit more time spent compacting, but it's bounded and worth it
>> (especially the reduced fragmentation) IMHO.
> 
> This seems to make the compaction a little complicated... I kind of

Just a little bit, compared to e.g. this patch :)

> don't see why this approach gives more anti-fragmentation.

Ah, so let me explain. Assume we want to allocate an order-3 unmovable page and
have to invoke compaction because of that. The migration scanner starts at the
beginning of a movable pageblock, isolates and migrates 32 pages
(COMPACT_CLUSTER_MAX) from the beginning successfully, and then we check in
compact_finished() and see that the allocation could fall back to this newly
created order-5 block. So we terminate compaction, and the allocation takes this
order-5 block, allocates order-3, and the rest remains on the unmovable free
list. It will try to steal more freepages from the pageblock, but there aren't
any. So we have a movable block with unmovable pages. The spare free pages are
eventually depleted and we fall back to another movable pageblock... bad.

With the proposed change, we would try to compact the pageblock fully. Possibly
we would free at least half of it, so it would be marked as unmovable during the
fallback allocation and would be able to satisfy more unmovable allocations
that wouldn't have to fall back somewhere else.

I don't think that finishing the scan of a single pageblock is that big of a
cost here.
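
Roughly something like this in __compact_finished() (a minimal, untested
sketch; it assumes a plain IS_ALIGNED() test on cc->migrate_pfn is enough,
and omits the CMA special case from patch 3 for brevity):

        for (order = cc->order; order < MAX_ORDER; order++) {
                struct free_area *area = &zone->free_area[order];
                bool can_steal;

                /* Job done if page is free of the right migratetype */
                if (!list_empty(&area->free_list[migratetype]))
                        return COMPACT_PARTIAL;

                /*
                 * Only consider stealing from other migratetypes once the
                 * migration scanner has completed a whole pageblock;
                 * otherwise keep compacting so that a later steal can
                 * claim the block fully.
                 */
                if (!IS_ALIGNED(cc->migrate_pfn, pageblock_nr_pages))
                        continue;

                if (find_suitable_fallback(area, order, migratetype,
                                                true, &can_steal) != -1)
                        return COMPACT_PARTIAL;
        }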



* Re: [RFC PATCH v3 3/3] mm/compaction: enhance compaction finish condition
  2015-02-02 13:51     ` Zhang Yanfei
  2015-02-02 14:19       ` Vlastimil Babka
@ 2015-02-03  6:55       ` Joonsoo Kim
  1 sibling, 0 replies; 15+ messages in thread
From: Joonsoo Kim @ 2015-02-03  6:55 UTC (permalink / raw)
  To: Zhang Yanfei
  Cc: Vlastimil Babka, Andrew Morton, Mel Gorman, David Rientjes,
	Rik van Riel, linux-mm, linux-kernel, Zhang Yanfei

On Mon, Feb 02, 2015 at 09:51:01PM +0800, Zhang Yanfei wrote:
> Hello,
> 
> At 2015/2/2 18:20, Vlastimil Babka wrote:
> > On 02/02/2015 08:15 AM, Joonsoo Kim wrote:
> >> Compaction has an anti-fragmentation algorithm: a freepage of at least
> >> pageblock order must be available to finish the compaction if we don't
> >> find any freepage in the requested migratetype buddy list. This is meant
> >> to mitigate fragmentation, but it lacks any migratetype consideration
> >> and it is too strict compared to the page allocator's anti-fragmentation
> >> algorithm.
> >>
> >> Not considering the migratetype can cause a premature finish of compaction.
> >> For example, if the allocation request is for the unmovable migratetype,
> >> a freepage with the CMA migratetype doesn't help that allocation, so
> >> compaction should not be stopped. But the current logic regards this
> >> situation as "compaction is no longer needed" and finishes the compaction.
> > 
> > This is only for order >= pageblock_order, right? Perhaps this should be stated explicitly.
> 
> I might be wrong, but if patch 1 is applied, then after the system runs for
> some time there should be no MIGRATE_CMA free pages left in the system, right?
> If so, the example above no longer applies.

Hello,

Compaction could migrate all pages in a MIGRATE_CMA pageblock, and in that
case order >= pageblock_order could be true. Also, CMA freepages are used
only as a fallback, so even with patch 1 applied this situation is still possible.

Thanks.


Thread overview: 15+ messages
2015-02-02  7:15 [RFC PATCH v3 1/3] mm/cma: change fallback behaviour for CMA freepage Joonsoo Kim
2015-02-02  7:15 ` [RFC PATCH v3 2/3] mm/page_alloc: factor out fallback freepage checking Joonsoo Kim
2015-02-02  9:59   ` Vlastimil Babka
2015-02-02 13:26     ` Joonsoo Kim
2015-02-02 12:56   ` Zhang Yanfei
2015-02-02 13:29     ` Joonsoo Kim
2015-02-02  7:15 ` [RFC PATCH v3 3/3] mm/compaction: enhance compaction finish condition Joonsoo Kim
2015-02-02 10:20   ` Vlastimil Babka
2015-02-02 13:23     ` Joonsoo Kim
2015-02-02 14:07       ` Vlastimil Babka
2015-02-02 13:51     ` Zhang Yanfei
2015-02-02 14:19       ` Vlastimil Babka
2015-02-03  6:55       ` Joonsoo Kim
2015-02-02  7:29 ` [RFC PATCH v3 1/3] mm/cma: change fallback behaviour for CMA freepage Joonsoo Kim
2015-02-02  8:27 ` Vlastimil Babka
