* [PATCH 1/3] mm/page_isolation: return last tested pfn rather than failure indicator
@ 2015-11-09  7:24 ` Joonsoo Kim
  0 siblings, 0 replies; 20+ messages in thread
From: Joonsoo Kim @ 2015-11-09  7:24 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Michal Nazarewicz, Minchan Kim, linux-mm, linux-kernel, Joonsoo Kim

This is a preparation step for reporting the pfn that failed the test
in a new tracepoint, to help analyze CMA allocation failures. There is
no functional change in this patch.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
---
 mm/page_isolation.c | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index 4568fd5..6f5ae96 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -212,7 +212,7 @@ int undo_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
  *
  * Returns 1 if all pages in the range are isolated.
  */
-static int
+static unsigned long
 __test_page_isolated_in_pageblock(unsigned long pfn, unsigned long end_pfn,
 				  bool skip_hwpoisoned_pages)
 {
@@ -237,9 +237,8 @@ __test_page_isolated_in_pageblock(unsigned long pfn, unsigned long end_pfn,
 		else
 			break;
 	}
-	if (pfn < end_pfn)
-		return 0;
-	return 1;
+
+	return pfn;
 }
 
 int test_pages_isolated(unsigned long start_pfn, unsigned long end_pfn,
@@ -248,7 +247,6 @@ int test_pages_isolated(unsigned long start_pfn, unsigned long end_pfn,
 	unsigned long pfn, flags;
 	struct page *page;
 	struct zone *zone;
-	int ret;
 
 	/*
 	 * Note: pageblock_nr_pages != MAX_ORDER. Then, chunks of free pages
@@ -266,10 +264,11 @@ int test_pages_isolated(unsigned long start_pfn, unsigned long end_pfn,
 	/* Check all pages are free or marked as ISOLATED */
 	zone = page_zone(page);
 	spin_lock_irqsave(&zone->lock, flags);
-	ret = __test_page_isolated_in_pageblock(start_pfn, end_pfn,
+	pfn = __test_page_isolated_in_pageblock(start_pfn, end_pfn,
 						skip_hwpoisoned_pages);
 	spin_unlock_irqrestore(&zone->lock, flags);
-	return ret ? 0 : -EBUSY;
+
+	return (pfn < end_pfn) ? -EBUSY : 0;
 }
 
 struct page *alloc_migrate_target(struct page *page, unsigned long private,
-- 
1.9.1
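The effect of this refactoring can be illustrated with a small user-space
model (a hedged sketch only; `scan_isolated`, `test_range`, and the toy
page table are illustrative names, not kernel code):

```c
#include <errno.h>

/* Toy model: 1 marks an isolated/free page, 0 a busy one. */
static const int page_isolated[] = {1, 1, 0, 1};

/*
 * Like the patched __test_page_isolated_in_pageblock(): walk
 * [pfn, end_pfn) and return the first pfn that fails the test,
 * or end_pfn if every page passed.
 */
static unsigned long scan_isolated(unsigned long pfn, unsigned long end_pfn)
{
	while (pfn < end_pfn && page_isolated[pfn])
		pfn++;
	return pfn;
}

/* Like the patched test_pages_isolated(): map the pfn back to 0/-EBUSY. */
static int test_range(unsigned long start_pfn, unsigned long end_pfn)
{
	unsigned long pfn = scan_isolated(start_pfn, end_pfn);

	return pfn < end_pfn ? -EBUSY : 0;
}
```

Callers still see only 0 or -EBUSY, but the exact pfn that stopped the
scan is now available internally, which is what the follow-up tracepoint
relies on.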


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH 2/3] mm/cma: add new tracepoint, test_pages_isolated
  2015-11-09  7:24 ` Joonsoo Kim
@ 2015-11-09  7:24   ` Joonsoo Kim
  -1 siblings, 0 replies; 20+ messages in thread
From: Joonsoo Kim @ 2015-11-09  7:24 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Michal Nazarewicz, Minchan Kim, linux-mm, linux-kernel, Joonsoo Kim

CMA allocation should be guaranteed to succeed, but sometimes it can
fail in the current implementation. To track down the problem, we need
to know which page is problematic, and this new tracepoint will report
it.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
---
 include/trace/events/cma.h | 26 ++++++++++++++++++++++++++
 mm/page_isolation.c        |  5 +++++
 2 files changed, 31 insertions(+)

diff --git a/include/trace/events/cma.h b/include/trace/events/cma.h
index d7cd961..82281b0 100644
--- a/include/trace/events/cma.h
+++ b/include/trace/events/cma.h
@@ -60,6 +60,32 @@ TRACE_EVENT(cma_release,
 		  __entry->count)
 );
 
+TRACE_EVENT(test_pages_isolated,
+
+	TP_PROTO(
+		unsigned long start_pfn,
+		unsigned long end_pfn,
+		unsigned long fin_pfn),
+
+	TP_ARGS(start_pfn, end_pfn, fin_pfn),
+
+	TP_STRUCT__entry(
+		__field(unsigned long, start_pfn)
+		__field(unsigned long, end_pfn)
+		__field(unsigned long, fin_pfn)
+	),
+
+	TP_fast_assign(
+		__entry->start_pfn = start_pfn;
+		__entry->end_pfn = end_pfn;
+		__entry->fin_pfn = fin_pfn;
+	),
+
+	TP_printk("start_pfn=0x%lx end_pfn=0x%lx fin_pfn=0x%lx ret=%s",
+		__entry->start_pfn, __entry->end_pfn, __entry->fin_pfn,
+		__entry->end_pfn == __entry->fin_pfn ? "success" : "fail")
+);
+
 #endif /* _TRACE_CMA_H */
 
 /* This part must be outside protection */
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index 6f5ae96..bda0fea 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -7,6 +7,8 @@
 #include <linux/pageblock-flags.h>
 #include <linux/memory.h>
 #include <linux/hugetlb.h>
+#include <trace/events/cma.h>
+
 #include "internal.h"
 
 static int set_migratetype_isolate(struct page *page,
@@ -268,6 +270,9 @@ int test_pages_isolated(unsigned long start_pfn, unsigned long end_pfn,
 						skip_hwpoisoned_pages);
 	spin_unlock_irqrestore(&zone->lock, flags);
 
+#ifdef CONFIG_CMA
+	trace_test_pages_isolated(start_pfn, end_pfn, pfn);
+#endif
 	return (pfn < end_pfn) ? -EBUSY : 0;
 }
 
-- 
1.9.1
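The success/fail decision encoded in the TP_printk() above can be checked
with a small user-space model (a sketch; `format_event` is an illustrative
helper, not part of the tracepoint API):

```c
#include <stdio.h>
#include <string.h>

/*
 * Mirror the tracepoint's TP_printk() format string: the event reports
 * "success" only when the scan reached end_pfn, i.e. fin_pfn == end_pfn.
 */
static void format_event(char *buf, size_t len, unsigned long start_pfn,
			 unsigned long end_pfn, unsigned long fin_pfn)
{
	snprintf(buf, len, "start_pfn=0x%lx end_pfn=0x%lx fin_pfn=0x%lx ret=%s",
		 start_pfn, end_pfn, fin_pfn,
		 end_pfn == fin_pfn ? "success" : "fail");
}
```

On failure, fin_pfn pinpoints the first page that could not be isolated,
which is exactly the information patch 1 made available.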


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH 3/3] mm/cma: always check which page cause allocation failure
  2015-11-09  7:24 ` Joonsoo Kim
@ 2015-11-09  7:24   ` Joonsoo Kim
  -1 siblings, 0 replies; 20+ messages in thread
From: Joonsoo Kim @ 2015-11-09  7:24 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Michal Nazarewicz, Minchan Kim, linux-mm, linux-kernel, Joonsoo Kim

We now have a tracepoint in test_pages_isolated() that reports the pfn
which cannot be isolated. But in alloc_contig_range(), some error paths
don't call test_pages_isolated(), so it is still hard to know the exact
pfn that causes an allocation failure.

This patch changes that by calling test_pages_isolated() in almost
every error path. In the allocation failure case, this change adds some
overhead, but allocation failure is a really rare event, so it does not
matter.

In the fatal-signal-pending case, we don't call test_pages_isolated()
because that failure is intentional.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
---
 mm/page_alloc.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d89960d..e78d78f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6756,8 +6756,12 @@ int alloc_contig_range(unsigned long start, unsigned long end,
 	if (ret)
 		return ret;
 
+	/*
+	 * In case of -EBUSY, we'd like to know which page causes problem.
+	 * So, just fall through. We will check it in test_pages_isolated().
+	 */
 	ret = __alloc_contig_migrate_range(&cc, start, end);
-	if (ret)
+	if (ret && ret != -EBUSY)
 		goto done;
 
 	/*
@@ -6784,8 +6788,8 @@ int alloc_contig_range(unsigned long start, unsigned long end,
 	outer_start = start;
 	while (!PageBuddy(pfn_to_page(outer_start))) {
 		if (++order >= MAX_ORDER) {
-			ret = -EBUSY;
-			goto done;
+			outer_start = start;
+			break;
 		}
 		outer_start &= ~0UL << order;
 	}
-- 
1.9.1
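The new fallback in the buddy-search loop can be modeled in user space
(a hedged sketch; `page_buddy` and `find_outer_start` are toy stand-ins
for PageBuddy() and the open-coded loop in alloc_contig_range()):

```c
#include <stdbool.h>

#define MAX_ORDER 11	/* value used by the default config of that era */

/* Toy predicate: pretend only pfn 0x400 starts a free buddy block. */
static bool page_buddy(unsigned long pfn)
{
	return pfn == 0x400;
}

/*
 * Model of the patched loop: walk down through larger buddy-aligned
 * prefixes of start. On failure, fall back to start (instead of the
 * old "return -EBUSY" path) so that the later test_pages_isolated()
 * call still runs and its tracepoint reports the offending pfn.
 */
static unsigned long find_outer_start(unsigned long start)
{
	unsigned long outer_start = start;
	unsigned int order = 0;

	while (!page_buddy(outer_start)) {
		if (++order >= MAX_ORDER)
			return start;	/* was: ret = -EBUSY; goto done; */
		outer_start &= ~0UL << order;
	}
	return outer_start;
}
```

With the fallback, the only cost on the failure path is one extra
isolation scan, which the commit message argues is acceptable for a
rare event.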


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* Re: [PATCH 1/3] mm/page_isolation: return last tested pfn rather than failure indicator
  2015-11-09  7:24 ` Joonsoo Kim
@ 2015-11-09 22:55   ` David Rientjes
  -1 siblings, 0 replies; 20+ messages in thread
From: David Rientjes @ 2015-11-09 22:55 UTC (permalink / raw)
  To: Joonsoo Kim
  Cc: Andrew Morton, Michal Nazarewicz, Minchan Kim, linux-mm,
	linux-kernel, Joonsoo Kim

On Mon, 9 Nov 2015, Joonsoo Kim wrote:

> This is a preparation step for reporting the pfn that failed the test
> in a new tracepoint, to help analyze CMA allocation failures. There is
> no functional change in this patch.
> 
> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

Acked-by: David Rientjes <rientjes@google.com>

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 2/3] mm/cma: add new tracepoint, test_pages_isolated
  2015-11-09  7:24   ` Joonsoo Kim
@ 2015-11-09 22:59     ` David Rientjes
  -1 siblings, 0 replies; 20+ messages in thread
From: David Rientjes @ 2015-11-09 22:59 UTC (permalink / raw)
  To: Joonsoo Kim
  Cc: Andrew Morton, Michal Nazarewicz, Minchan Kim, linux-mm,
	linux-kernel, Joonsoo Kim

On Mon, 9 Nov 2015, Joonsoo Kim wrote:

> diff --git a/include/trace/events/cma.h b/include/trace/events/cma.h
> index d7cd961..82281b0 100644
> --- a/include/trace/events/cma.h
> +++ b/include/trace/events/cma.h
> @@ -60,6 +60,32 @@ TRACE_EVENT(cma_release,
>  		  __entry->count)
>  );
>  
> +TRACE_EVENT(test_pages_isolated,
> +
> +	TP_PROTO(
> +		unsigned long start_pfn,
> +		unsigned long end_pfn,
> +		unsigned long fin_pfn),
> +
> +	TP_ARGS(start_pfn, end_pfn, fin_pfn),
> +
> +	TP_STRUCT__entry(
> +		__field(unsigned long, start_pfn)
> +		__field(unsigned long, end_pfn)
> +		__field(unsigned long, fin_pfn)
> +	),
> +
> +	TP_fast_assign(
> +		__entry->start_pfn = start_pfn;
> +		__entry->end_pfn = end_pfn;
> +		__entry->fin_pfn = fin_pfn;
> +	),
> +
> +	TP_printk("start_pfn=0x%lx end_pfn=0x%lx fin_pfn=0x%lx ret=%s",
> +		__entry->start_pfn, __entry->end_pfn, __entry->fin_pfn,
> +		__entry->end_pfn == __entry->fin_pfn ? "success" : "fail")
> +);
> +
>  #endif /* _TRACE_CMA_H */
>  
>  /* This part must be outside protection */
> diff --git a/mm/page_isolation.c b/mm/page_isolation.c
> index 6f5ae96..bda0fea 100644
> --- a/mm/page_isolation.c
> +++ b/mm/page_isolation.c
> @@ -7,6 +7,8 @@
>  #include <linux/pageblock-flags.h>
>  #include <linux/memory.h>
>  #include <linux/hugetlb.h>
> +#include <trace/events/cma.h>
> +
>  #include "internal.h"
>  
>  static int set_migratetype_isolate(struct page *page,
> @@ -268,6 +270,9 @@ int test_pages_isolated(unsigned long start_pfn, unsigned long end_pfn,
>  						skip_hwpoisoned_pages);
>  	spin_unlock_irqrestore(&zone->lock, flags);
>  
> +#ifdef CONFIG_CMA
> +	trace_test_pages_isolated(start_pfn, end_pfn, pfn);
> +#endif
>  	return (pfn < end_pfn) ? -EBUSY : 0;
>  }
>  

This is also used for memory offlining, so could we generalize the 
tracepoint to CONFIG_CMA || CONFIG_MEMORY_HOTREMOVE?

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 2/3] mm/cma: add new tracepoint, test_pages_isolated
  2015-11-09 22:59     ` David Rientjes
@ 2015-11-10  0:22       ` Joonsoo Kim
  -1 siblings, 0 replies; 20+ messages in thread
From: Joonsoo Kim @ 2015-11-10  0:22 UTC (permalink / raw)
  To: David Rientjes
  Cc: Andrew Morton, Michal Nazarewicz, Minchan Kim, linux-mm, linux-kernel

On Mon, Nov 09, 2015 at 02:59:39PM -0800, David Rientjes wrote:
> On Mon, 9 Nov 2015, Joonsoo Kim wrote:
> 
> > diff --git a/include/trace/events/cma.h b/include/trace/events/cma.h
> > index d7cd961..82281b0 100644
> > --- a/include/trace/events/cma.h
> > +++ b/include/trace/events/cma.h
> > @@ -60,6 +60,32 @@ TRACE_EVENT(cma_release,
> >  		  __entry->count)
> >  );
> >  
> > +TRACE_EVENT(test_pages_isolated,
> > +
> > +	TP_PROTO(
> > +		unsigned long start_pfn,
> > +		unsigned long end_pfn,
> > +		unsigned long fin_pfn),
> > +
> > +	TP_ARGS(start_pfn, end_pfn, fin_pfn),
> > +
> > +	TP_STRUCT__entry(
> > +		__field(unsigned long, start_pfn)
> > +		__field(unsigned long, end_pfn)
> > +		__field(unsigned long, fin_pfn)
> > +	),
> > +
> > +	TP_fast_assign(
> > +		__entry->start_pfn = start_pfn;
> > +		__entry->end_pfn = end_pfn;
> > +		__entry->fin_pfn = fin_pfn;
> > +	),
> > +
> > +	TP_printk("start_pfn=0x%lx end_pfn=0x%lx fin_pfn=0x%lx ret=%s",
> > +		__entry->start_pfn, __entry->end_pfn, __entry->fin_pfn,
> > +		__entry->end_pfn == __entry->fin_pfn ? "success" : "fail")
> > +);
> > +
> >  #endif /* _TRACE_CMA_H */
> >  
> >  /* This part must be outside protection */
> > diff --git a/mm/page_isolation.c b/mm/page_isolation.c
> > index 6f5ae96..bda0fea 100644
> > --- a/mm/page_isolation.c
> > +++ b/mm/page_isolation.c
> > @@ -7,6 +7,8 @@
> >  #include <linux/pageblock-flags.h>
> >  #include <linux/memory.h>
> >  #include <linux/hugetlb.h>
> > +#include <trace/events/cma.h>
> > +
> >  #include "internal.h"
> >  
> >  static int set_migratetype_isolate(struct page *page,
> > @@ -268,6 +270,9 @@ int test_pages_isolated(unsigned long start_pfn, unsigned long end_pfn,
> >  						skip_hwpoisoned_pages);
> >  	spin_unlock_irqrestore(&zone->lock, flags);
> >  
> > +#ifdef CONFIG_CMA
> > +	trace_test_pages_isolated(start_pfn, end_pfn, pfn);
> > +#endif
> >  	return (pfn < end_pfn) ? -EBUSY : 0;
> >  }
> >  
> 
> This is also used for memory offlining, so could we generalize the 
> tracepoint to CONFIG_CMA || CONFIG_MEMORY_HOTREMOVE?

Okay. I will enable it under CONFIG_MEMORY_ISOLATION so that
CONFIG_CMA || CONFIG_MEMORY_HOTREMOVE || CONFIG_MEMORY_FAILURE can
all benefit from it.

Thanks.

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 1/3] mm/page_isolation: return last tested pfn rather than failure indicator
  2015-11-09  7:24 ` Joonsoo Kim
@ 2015-11-10 16:05   ` Michal Nazarewicz
  -1 siblings, 0 replies; 20+ messages in thread
From: Michal Nazarewicz @ 2015-11-10 16:05 UTC (permalink / raw)
  To: Joonsoo Kim, Andrew Morton
  Cc: Minchan Kim, linux-mm, linux-kernel, Joonsoo Kim

On Mon, Nov 09 2015, Joonsoo Kim wrote:
> This is a preparation step for reporting the pfn that failed the test
> in a new tracepoint, to help analyze CMA allocation failures. There is
> no functional change in this patch.
>
> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

Acked-by: Michal Nazarewicz <mina86@mina86.com>

> ---
>  mm/page_isolation.c | 13 ++++++-------
>  1 file changed, 6 insertions(+), 7 deletions(-)
>
> @@ -266,10 +264,11 @@ int test_pages_isolated(unsigned long start_pfn, unsigned long end_pfn,
>  	/* Check all pages are free or marked as ISOLATED */
>  	zone = page_zone(page);
>  	spin_lock_irqsave(&zone->lock, flags);
> -	ret = __test_page_isolated_in_pageblock(start_pfn, end_pfn,
> +	pfn = __test_page_isolated_in_pageblock(start_pfn, end_pfn,
>  						skip_hwpoisoned_pages);
>  	spin_unlock_irqrestore(&zone->lock, flags);
> -	return ret ? 0 : -EBUSY;
> +
> +	return (pfn < end_pfn) ? -EBUSY : 0;

Parens aren’t necessary.  No strong feelings.

>  }
>  
>  struct page *alloc_migrate_target(struct page *page, unsigned long private,

-- 
Best regards,                                            _     _
.o. | Liege of Serenely Enlightened Majesty of         o' \,=./ `o
..o | Computer Science,  ミハウ “mina86” ナザレヴイツ  (o o)
ooo +--<mpn@google.com>--<xmpp:mina86@jabber.org>-----ooO--(_)--Ooo--

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 2/3] mm/cma: add new tracepoint, test_pages_isolated
  2015-11-09  7:24   ` Joonsoo Kim
@ 2015-11-10 16:06     ` Michal Nazarewicz
  -1 siblings, 0 replies; 20+ messages in thread
From: Michal Nazarewicz @ 2015-11-10 16:06 UTC (permalink / raw)
  To: Joonsoo Kim, Andrew Morton
  Cc: Minchan Kim, linux-mm, linux-kernel, Joonsoo Kim

On Mon, Nov 09 2015, Joonsoo Kim wrote:
> CMA allocation should be guaranteed to succeed, but sometimes it can
> fail in the current implementation. To track down the problem, we need
> to know which page is problematic, and this new tracepoint will report
> it.
>
> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

I’m not really familiar with the tracing framework, but other than that:

Acked-by: Michal Nazarewicz <mina86@mina86.com>

> ---
>  include/trace/events/cma.h | 26 ++++++++++++++++++++++++++
>  mm/page_isolation.c        |  5 +++++
>  2 files changed, 31 insertions(+)

-- 
Best regards,                                            _     _
.o. | Liege of Serenely Enlightened Majesty of         o' \,=./ `o
..o | Computer Science,  ミハウ “mina86” ナザレヴイツ  (o o)
ooo +--<mpn@google.com>--<xmpp:mina86@jabber.org>-----ooO--(_)--Ooo--

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 1/3] mm/page_isolation: return last tested pfn rather than failure indicator
  2015-11-13  2:23 ` Joonsoo Kim
@ 2015-11-24 14:57   ` Vlastimil Babka
  0 siblings, 0 replies; 20+ messages in thread
From: Vlastimil Babka @ 2015-11-24 14:57 UTC (permalink / raw)
  To: Joonsoo Kim, Andrew Morton
  Cc: Michal Nazarewicz, Minchan Kim, David Rientjes, linux-mm,
	linux-kernel, Joonsoo Kim

On 11/13/2015 03:23 AM, Joonsoo Kim wrote:
> This is a preparation step to report the pfn that failed the isolation
> test in a new tracepoint, to analyze the CMA allocation failure problem.
> There is no functional change in this patch.
>
> Acked-by: David Rientjes <rientjes@google.com>
> Acked-by: Michal Nazarewicz <mina86@mina86.com>
> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

Acked-by: Vlastimil Babka <vbabka@suse.cz>


^ permalink raw reply	[flat|nested] 20+ messages in thread

* [PATCH 1/3] mm/page_isolation: return last tested pfn rather than failure indicator
@ 2015-11-13  2:23 ` Joonsoo Kim
  0 siblings, 0 replies; 20+ messages in thread
From: Joonsoo Kim @ 2015-11-13  2:23 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Michal Nazarewicz, Minchan Kim, David Rientjes, linux-mm,
	linux-kernel, Joonsoo Kim

This is a preparation step to report the pfn that failed the isolation
test in a new tracepoint, to analyze the CMA allocation failure problem.
There is no functional change in this patch.

Acked-by: David Rientjes <rientjes@google.com>
Acked-by: Michal Nazarewicz <mina86@mina86.com>
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
---
 mm/page_isolation.c | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index 4568fd5..029a171 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -212,7 +212,7 @@ int undo_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
  *
  * Returns 1 if all pages in the range are isolated.
  */
-static int
+static unsigned long
 __test_page_isolated_in_pageblock(unsigned long pfn, unsigned long end_pfn,
 				  bool skip_hwpoisoned_pages)
 {
@@ -237,9 +237,8 @@ __test_page_isolated_in_pageblock(unsigned long pfn, unsigned long end_pfn,
 		else
 			break;
 	}
-	if (pfn < end_pfn)
-		return 0;
-	return 1;
+
+	return pfn;
 }
 
 int test_pages_isolated(unsigned long start_pfn, unsigned long end_pfn,
@@ -248,7 +247,6 @@ int test_pages_isolated(unsigned long start_pfn, unsigned long end_pfn,
 	unsigned long pfn, flags;
 	struct page *page;
 	struct zone *zone;
-	int ret;
 
 	/*
 	 * Note: pageblock_nr_pages != MAX_ORDER. Then, chunks of free pages
@@ -266,10 +264,11 @@ int test_pages_isolated(unsigned long start_pfn, unsigned long end_pfn,
 	/* Check all pages are free or marked as ISOLATED */
 	zone = page_zone(page);
 	spin_lock_irqsave(&zone->lock, flags);
-	ret = __test_page_isolated_in_pageblock(start_pfn, end_pfn,
+	pfn = __test_page_isolated_in_pageblock(start_pfn, end_pfn,
 						skip_hwpoisoned_pages);
 	spin_unlock_irqrestore(&zone->lock, flags);
-	return ret ? 0 : -EBUSY;
+
+	return pfn < end_pfn ? -EBUSY : 0;
 }
 
 struct page *alloc_migrate_target(struct page *page, unsigned long private,
-- 
1.9.1


^ permalink raw reply related	[flat|nested] 20+ messages in thread

end of thread, other threads:[~2015-11-24 14:57 UTC | newest]

Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-11-09  7:24 [PATCH 1/3] mm/page_isolation: return last tested pfn rather than failure indicator Joonsoo Kim
2015-11-09  7:24 ` Joonsoo Kim
2015-11-09  7:24 ` [PATCH 2/3] mm/cma: add new tracepoint, test_pages_isolated Joonsoo Kim
2015-11-09  7:24   ` Joonsoo Kim
2015-11-09 22:59   ` David Rientjes
2015-11-09 22:59     ` David Rientjes
2015-11-10  0:22     ` Joonsoo Kim
2015-11-10  0:22       ` Joonsoo Kim
2015-11-10 16:06   ` Michal Nazarewicz
2015-11-10 16:06     ` Michal Nazarewicz
2015-11-09  7:24 ` [PATCH 3/3] mm/cma: always check which page cause allocation failure Joonsoo Kim
2015-11-09  7:24   ` Joonsoo Kim
2015-11-09 22:55 ` [PATCH 1/3] mm/page_isolation: return last tested pfn rather than failure indicator David Rientjes
2015-11-09 22:55   ` David Rientjes
2015-11-10 16:05 ` Michal Nazarewicz
2015-11-10 16:05   ` Michal Nazarewicz
2015-11-13  2:23 Joonsoo Kim
2015-11-13  2:23 ` Joonsoo Kim
2015-11-24 14:57 ` Vlastimil Babka
2015-11-24 14:57   ` Vlastimil Babka
