linux-kernel.vger.kernel.org archive mirror
* [PATCH 1/3] mm/page_isolation: return last tested pfn rather than failure indicator
@ 2015-11-13  2:23 Joonsoo Kim
  2015-11-13  2:23 ` [PATCH 2/3] mm/page_isolation: add new tracepoint, test_pages_isolated Joonsoo Kim
                   ` (2 more replies)
  0 siblings, 3 replies; 12+ messages in thread
From: Joonsoo Kim @ 2015-11-13  2:23 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Michal Nazarewicz, Minchan Kim, David Rientjes, linux-mm,
	linux-kernel, Joonsoo Kim

This is a preparation step for reporting the pfn of the page that failed
the test through a new tracepoint, to help analyze CMA allocation failures.
There is no functional change in this patch.

Acked-by: David Rientjes <rientjes@google.com>
Acked-by: Michal Nazarewicz <mina86@mina86.com>
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
---
 mm/page_isolation.c | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index 4568fd5..029a171 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -212,7 +212,7 @@ int undo_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
  *
  * Returns 1 if all pages in the range are isolated.
  */
-static int
+static unsigned long
 __test_page_isolated_in_pageblock(unsigned long pfn, unsigned long end_pfn,
 				  bool skip_hwpoisoned_pages)
 {
@@ -237,9 +237,8 @@ __test_page_isolated_in_pageblock(unsigned long pfn, unsigned long end_pfn,
 		else
 			break;
 	}
-	if (pfn < end_pfn)
-		return 0;
-	return 1;
+
+	return pfn;
 }
 
 int test_pages_isolated(unsigned long start_pfn, unsigned long end_pfn,
@@ -248,7 +247,6 @@ int test_pages_isolated(unsigned long start_pfn, unsigned long end_pfn,
 	unsigned long pfn, flags;
 	struct page *page;
 	struct zone *zone;
-	int ret;
 
 	/*
 	 * Note: pageblock_nr_pages != MAX_ORDER. Then, chunks of free pages
@@ -266,10 +264,11 @@ int test_pages_isolated(unsigned long start_pfn, unsigned long end_pfn,
 	/* Check all pages are free or marked as ISOLATED */
 	zone = page_zone(page);
 	spin_lock_irqsave(&zone->lock, flags);
-	ret = __test_page_isolated_in_pageblock(start_pfn, end_pfn,
+	pfn = __test_page_isolated_in_pageblock(start_pfn, end_pfn,
 						skip_hwpoisoned_pages);
 	spin_unlock_irqrestore(&zone->lock, flags);
-	return ret ? 0 : -EBUSY;
+
+	return pfn < end_pfn ? -EBUSY : 0;
 }
 
 struct page *alloc_migrate_target(struct page *page, unsigned long private,
-- 
1.9.1



Thread overview: 12+ messages
2015-11-13  2:23 [PATCH 1/3] mm/page_isolation: return last tested pfn rather than failure indicator Joonsoo Kim
2015-11-13  2:23 ` [PATCH 2/3] mm/page_isolation: add new tracepoint, test_pages_isolated Joonsoo Kim
2015-11-13 22:51   ` David Rientjes
2015-11-19 23:34   ` Andrew Morton
2015-11-20  6:21     ` Joonsoo Kim
2015-11-24 14:57   ` Vlastimil Babka
2015-11-13  2:23 ` [PATCH 3/3] mm/cma: always check which page cause allocation failure Joonsoo Kim
2015-11-24 15:27   ` Vlastimil Babka
2015-11-25  2:39     ` Joonsoo Kim
2015-11-25  5:32       ` [PATCH v2] " Joonsoo Kim
2015-11-25 10:45         ` Vlastimil Babka
2015-11-24 14:57 ` [PATCH 1/3] mm/page_isolation: return last tested pfn rather than failure indicator Vlastimil Babka
