* [PATCH v2 for v3.18] mm/compaction: skip the range until proper target pageblock is met
@ 2014-11-04  2:37 Joonsoo Kim
  2014-11-04  8:46 ` Vlastimil Babka
  0 siblings, 1 reply; 2+ messages in thread
From: Joonsoo Kim @ 2014-11-04  2:37 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Vlastimil Babka, David Rientjes, linux-mm, linux-kernel,
	Minchan Kim, Michal Nazarewicz, Naoya Horiguchi,
	Christoph Lameter, Rik van Riel, Mel Gorman, Zhang Yanfei,
	Joonsoo Kim

Commit 7d49d8868336 ("mm, compaction: reduce zone checking frequency in
the migration scanner") has the side effect of changing how the
iteration range is calculated. Before that change, block_end_pfn was
calculated from start_pfn; now the loop blindly adds pageblock_nr_pages
to the previous value.

This causes isolation_start_pfn to be larger than block_end_pfn when we
isolate a page of more than pageblock order. In that case, isolation
fails because the range parameters are invalid.

To prevent this, this patch recalculates the range so that it covers a
valid target pageblock. Without this patch, CMA allocations of more
than pageblock order always fail; with this patch, they succeed.
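
For illustration, a standalone userspace sketch of the range arithmetic
(hypothetical program, not kernel code), assuming order-9 pageblocks
(pageblock_nr_pages == 512) and an isolated order-10, i.e.
two-pageblock, free page:

#include <stdio.h>

#define PAGEBLOCK_NR_PAGES	512UL	/* assumed: order-9 pageblocks */
#define ALIGN(x, a)	(((x) + (a) - 1) & ~((a) - 1))
#define MIN(a, b)	((a) < (b) ? (a) : (b))

int main(void)
{
	unsigned long end_pfn = 4096;
	unsigned long pfn = 0;		/* isolation starts at a pageblock boundary */
	unsigned long block_end_pfn = ALIGN(pfn + 1, PAGEBLOCK_NR_PAGES);	/* 512 */
	unsigned long isolated = 2 * PAGEBLOCK_NR_PAGES;	/* an order-10 free page */

	/* the for-loop increments in isolate_freepages_range() */
	pfn += isolated;			/* 1024: jumped past the current pageblock */
	block_end_pfn += PAGEBLOCK_NR_PAGES;	/* 1024: now pfn >= block_end_pfn */
	block_end_pfn = MIN(block_end_pfn, end_pfn);
	printf("old: pfn=%lu block_end_pfn=%lu (empty range, isolation fails)\n",
	       pfn, block_end_pfn);

	/* with this patch: realign the end to the pageblock following pfn */
	if (pfn >= block_end_pfn) {
		block_end_pfn = ALIGN(pfn + 1, PAGEBLOCK_NR_PAGES);	/* 1536 */
		block_end_pfn = MIN(block_end_pfn, end_pfn);
	}
	printf("new: pfn=%lu block_end_pfn=%lu (valid pageblock range)\n",
	       pfn, block_end_pfn);
	return 0;
}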

Changes from v1:
recalculate the range rather than just skipping ahead to find a valid one.
add a code comment.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
---
 mm/compaction.c |   10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/mm/compaction.c b/mm/compaction.c
index ec74cf0..4f0151c 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -479,6 +479,16 @@ isolate_freepages_range(struct compact_control *cc,
 
 		block_end_pfn = min(block_end_pfn, end_pfn);
 
+		/*
+		 * pfn could pass block_end_pfn if the isolated free page
+		 * is of more than pageblock order. In this case, adjust
+		 * the scanning range to the correct target pageblock.
+		 */
+		if (pfn >= block_end_pfn) {
+			block_end_pfn = ALIGN(pfn + 1, pageblock_nr_pages);
+			block_end_pfn = min(block_end_pfn, end_pfn);
+		}
+
 		if (!pageblock_pfn_to_page(pfn, block_end_pfn, cc->zone))
 			break;
 
-- 
1.7.9.5



* Re: [PATCH v2 for v3.18] mm/compaction: skip the range until proper target pageblock is met
  2014-11-04  2:37 [PATCH v2 for v3.18] mm/compaction: skip the range until proper target pageblock is met Joonsoo Kim
@ 2014-11-04  8:46 ` Vlastimil Babka
  0 siblings, 0 replies; 2+ messages in thread
From: Vlastimil Babka @ 2014-11-04  8:46 UTC (permalink / raw)
  To: Joonsoo Kim, Andrew Morton
  Cc: David Rientjes, linux-mm, linux-kernel, Minchan Kim,
	Michal Nazarewicz, Naoya Horiguchi, Christoph Lameter,
	Rik van Riel, Mel Gorman, Zhang Yanfei

On 11/04/2014 03:37 AM, Joonsoo Kim wrote:
> Commit 7d49d8868336 ("mm, compaction: reduce zone checking frequency in
> the migration scanner") has the side effect of changing how the
> iteration range is calculated. Before that change, block_end_pfn was
> calculated from start_pfn; now the loop blindly adds pageblock_nr_pages
> to the previous value.
>
> This causes isolation_start_pfn to be larger than block_end_pfn when we
> isolate a page of more than pageblock order. In that case, isolation
> fails because the range parameters are invalid.
>
> To prevent this, this patch recalculates the range so that it covers a
> valid target pageblock. Without this patch, CMA allocations of more
> than pageblock order always fail; with this patch, they succeed.
>
> Changes from v1:
> recalculate the range rather than just skipping ahead to find a valid one.
> add a code comment.
>
> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

Acked-by: Vlastimil Babka <vbabka@suse.cz>

(nitpick below)

> ---
>   mm/compaction.c |   10 ++++++++++
>   1 file changed, 10 insertions(+)
>
> diff --git a/mm/compaction.c b/mm/compaction.c
> index ec74cf0..4f0151c 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -479,6 +479,16 @@ isolate_freepages_range(struct compact_control *cc,
>
>   		block_end_pfn = min(block_end_pfn, end_pfn);
>
> +		/*
> +		 * pfn could pass block_end_pfn if the isolated free page
> +		 * is of more than pageblock order. In this case, adjust
> +		 * the scanning range to the correct target pageblock.
> +		 */
> +		if (pfn >= block_end_pfn) {
> +			block_end_pfn = ALIGN(pfn + 1, pageblock_nr_pages);
> +			block_end_pfn = min(block_end_pfn, end_pfn);
> +		}

If you moved this up, there could be just one min(block_end_pfn,
end_pfn) instance in the code. If the first min() makes block_end_pfn ==
end_pfn and pfn >= block_end_pfn, then pfn >= end_pfn and the loop would
have terminated already (assuming that is why you kept the first min()
before the new check). But I don't mind if you leave it like this.
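
For what it's worth, a quick standalone check of that equivalence
(userspace sketch, not kernel code; it assumes 512-page pageblocks and
mirrors only the clamping arithmetic of the loop body):

#include <assert.h>
#include <stdio.h>

#define PAGEBLOCK_NR_PAGES	512UL
#define ALIGN(x, a)	(((x) + (a) - 1) & ~((a) - 1))
#define MIN(a, b)	((a) < (b) ? (a) : (b))

/* clamp, realign, clamp again: the ordering in the patch */
static unsigned long as_patched(unsigned long pfn, unsigned long block_end,
				unsigned long end_pfn)
{
	block_end = MIN(block_end, end_pfn);
	if (pfn >= block_end) {
		block_end = ALIGN(pfn + 1, PAGEBLOCK_NR_PAGES);
		block_end = MIN(block_end, end_pfn);
	}
	return block_end;
}

/* realign first, then a single clamp: the check moved up */
static unsigned long moved_up(unsigned long pfn, unsigned long block_end,
			      unsigned long end_pfn)
{
	if (pfn >= block_end)
		block_end = ALIGN(pfn + 1, PAGEBLOCK_NR_PAGES);
	return MIN(block_end, end_pfn);
}

int main(void)
{
	unsigned long end_pfn = 4096;
	unsigned long pfn, block_end;

	/* the loop only runs while pfn < end_pfn, so check that domain */
	for (pfn = 0; pfn < end_pfn; pfn++)
		for (block_end = PAGEBLOCK_NR_PAGES;
		     block_end <= end_pfn + PAGEBLOCK_NR_PAGES;
		     block_end += PAGEBLOCK_NR_PAGES)
			assert(as_patched(pfn, block_end, end_pfn) ==
			       moved_up(pfn, block_end, end_pfn));

	printf("both orderings agree for all pfn < end_pfn\n");
	return 0;
}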

> +
>   		if (!pageblock_pfn_to_page(pfn, block_end_pfn, cc->zone))
>   			break;
>
>

